Hello author. Please make it export to Unreal Engine or Blender, with modeling data from cheap LiDAR or generated by 2D-to-3D AI. Lots of indie filmmakers want to do VFX, and this skips a lot of steps that only studios with millions of dollars could afford. If a decent PC can render what's on a virtual camera frame after importing it into Unreal Engine, it would revamp the entire market for small players... Thank you in advance ❤.
Hello author, one beginner question: why is it CUDA-only? Will it run on a Ryzen 5 5800G or above, using the APU's shared memory as VRAM? This guy showed how to run CUDA-based AI models on a CPU, with results almost equal to an RTX 4080! Is that possible? ua-cam.com/video/H9oaNZNJdrw/v-deo.htmlsi=A0FjUtKPOUaERFJ2
@@crossybreed Typically the answer is that software development for CUDA has been mature for a longer time. That does not mean CUDA is 'stable', especially since its support typically requires recent hardware; and where CUDA did support older hardware, more recent development no longer supports those CUDA versions. In that respect AMD has done crazy things themselves, for example with (or rather, no longer) supporting OpenCL. Only recently has HIP started to appear, which (obviously) requires porting existing CUDA and OpenCL code. It's also only very recently that you see academic work on GitHub; before, the code you saw in papers was never released. I don't think it's in researchers' best interest to write the most portable code; in essence they just need to prove that an idea works.
Man, I remember we learned the theory in the 90s: movement detection from motion blur, then calculating/recreating vectors and depth in a black-and-white photo. But computing power was so limited we were never really able to proceed... working on paper, lol. Later volumetric clouds came in, and now we have this. This is truly awesome, something I had dreamed of for decades. I recreated my own 3D scenes for months; now it's just a video and some photos you feed into an AI. I am totally blown away.
Man I don't even know where to start... I've been struggling with the submodules diff-gaussian-rasterization and simple-knn and have been combing through the GitHub comments about the same issue for hours... nothing seems to work. Do I just need to uninstall all components and start fresh? Seems to be a lot of confusion about which CUDA version to use and how to install torch.
Hey, I've trained the model using Gaussian splatting. I can see the result in nerfstudio and render video. But how do I get a .ply or .obj mesh file so I can view my result with colours in any 3D viewer? Please help me if you have found a solution for this.
I saw the term Gaussian Splatt and looked it up. I came to this video and watched a few minutes of it. I understand that this chap is speaking English - I recognise some of the words - but honestly, on the whole it sounds like a sequence of random words "git, splat, hub, fork, repo". Great stuff!
I cannot thank you enough for this. I went from knowing basically nothing to a completed scene in 3 hours. It would have been much faster but I had the wrong CUDA version. You absolutely nailed this tutorial!
After 3 days of trying, I got it!! I just want to say thank you for your tutorial!! I had some troubles, and after solving them the results are really amazing!!
been seeing this everywhere, but nobody wanted to show how to do it! Thank you so much for this. I'm on my way to make some really awesome art. Keep it up!!
Hey, this is very cool! Thank you so much. I finally managed to generate my first Gaussian Splatting model with your instructions. There are a lot of steps in here, but it finally works! Nice that you made a Windows version of the original on GitHub. Thank you! This is so interesting!
This video is great! Thank you for being so generous with your knowledge about this cutting edge technology! ...and the command prompt tips were GOLD! :)
Thank you! I usually find the biggest hurdle for people to get into this tech is understanding command prompt. I find it funny that I struggle with Unreal Engine blueprints which is arguably easier because it's visual.
Thank you very much for making this video! You're a great teacher, I had no problems following this and I'm very happy with how my first splat turned out.
I was getting errors with "diff-gaussian-rasterization" and "simple-knn" when running the command "conda env create --file environment.yml". I just wanted to let others with the same problem know what I did to solve it. I had to add "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64" to PATH in order for the build command to find cl.exe. And since the "conda env create" command was aborted before it was finished I had to start a new CMD window after adding to PATH and rerun the commands, ending with "conda activate gaussian_splatting" and from there I typed "conda env update --file environment.yml" to start the build process again. Hope it helps someone. 😊
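For anyone following the PATH fix above: a quick way to confirm the compiler is actually visible from your current window is a one-line stdlib check (this is just a hypothetical diagnostic, not part of the repo):

```python
import shutil

def find_tool(name):
    """Return the full path of an executable on PATH, or None if this shell can't see it."""
    return shutil.which(name)

# After adding the MSVC bin directory to PATH and opening a NEW command window,
# this should print the path to cl.exe instead of None.
print(find_tool("cl"))
```

If it prints None, the build will still fail with the same errors; remember that PATH changes only apply to windows opened after the change.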
I had the same issue and this tip didn't work; I got the exact same errors again. I have tried everything I could find to address them, but it seems like I'm at a dead end. Hope someone can make a more accessible version soon.
@@lilmanpurse8603 Make sure to run these commands with administrator rights after adding the path: activate the environment, then pip install -e submodules/diff-gaussian-rasterization and pip install -e submodules/simple-knn.
@@lilmanpurse8603 Scroll down to the bottom of the GitHub page and check the common issues section; last time I checked, a more complete step-by-step instruction was there. It might be a small missed step, like not closing your command window between changes to the PATH. Hope you find a solution.
@@lilmanpurse8603 I tried two ways to solve it: one is adding the VS path as described above, and the other is changing my cudatoolkit version from 12.3 to 11.8. I don't know which of these worked, but it's solved now. Also worth mentioning: after making changes to the environment, restarting the computer can be very effective. 😂😂
It's showing me this error! I tried to solve it but it isn't working... please help!
\gaussian_renderer\__init__.py", line 14, in
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
ModuleNotFoundError: No module named 'diff_gaussian_rasterization'
Bravo. I will give this a go with images captured with my Leica BLK3D and compare the result with the coloured textured mesh created from the same images in RealityCapture.
Correct me if I'm wrong, but it should be possible to convert any 3D Blender scene into Gaussians, because we already know the position of the virtual camera in the rendered scene. So it should be possible to create photorealistic, interactive scenes pretty quickly from a few images rendered in Blender with Cycles. This could be a revolution in gaming as well.
@@mattizzle81 I converted a 3D Blender scene into Gaussians, and it looks amazing. You can do movie production on a low-end PC if you render the scene in Eevee.
Not really convinced that a triangle-based, photorealistic Blender scene renders faster or better if you convert it to Gaussians. It's just a different, more complex way to represent the same dataset. I would argue you actually lose performance and quality, since you approximate the original geometry and shaders with particles and fourth-order spherical harmonics.
You nailed it! Thank you. I had been led astray by other tuts and this one was very straightforward. I should drive all the confused viewers from other tuts this way.
I managed to get pretty cool results on an RTX 2070S with just 5000 iterations. The training step was pretty fast too, under 10 minutes! Incredible technology. I can only imagine how much more incredible it's going to get in a couple of years. Thanks for the video; everything was explained very clearly.
I get this while "Installing pip dependencies":
Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed
Nice tutorial. One suggestion: when showing webpages or applications, it would be great if you could zoom in so the text is bigger, so people with impaired sight can see what is on screen :) Thanks
We used this 13 years ago for 2D-to-3D stereo conversion with a polar axis. :) Now it's real-time and hardware-based. It used to be software based on depth maps laid out manually. :) Now the hardware auto-creates the depth maps that drive it.
Hi, I have a problem during Installing the Optimizer. When I run the second command (conda env create --file environment.yml) it starts OK, but then gives an error while installing pip dependencies, like below (any help is appreciated):
Pip subprocess output:
Processing c:\users\tarek\gaussian-splatting\submodules\diff-gaussian-rasterization
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Processing c:\users\tarek\gaussian-splatting\submodules\simple-knn
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: diff_gaussian_rasterization, simple_knn
Building wheel for diff_gaussian_rasterization (setup.py): started
Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'
Running setup.py clean for diff_gaussian_rasterization
Building wheel for simple_knn (setup.py): started
Building wheel for simple_knn (setup.py): finished with status 'error'
Running setup.py clean for simple_knn
Failed to build diff_gaussian_rasterization simple_knn
Installing collected packages: simple_knn, diff_gaussian_rasterization
Running setup.py install for simple_knn: started
Running setup.py install for simple_knn: finished with status 'error'
Pip subprocess error:
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
While using gaussian-splatting-Windows: when the images originally in the 'input' folder are processed by python convert.py -s and output to the 'images' directory, they drop from dozens to just a few, and the remaining images show a wide-angle effect. What could be the issue, and how should I troubleshoot it?
@@thenerfguru I think this won't work on the 12-year-old mid-range laptop I'm currently on: 4 GB RAM, Intel Pentium B960 CPU, AMD Radeon 7400M (or 7400G dual) GPU, Windows 7. I've optimized it as much as I can; I've been using it for 9 years and it runs everything fine. IDK, I just want to noclip through mp4 videos.
Thank you for sharing this valuable tutorial! It was extremely informative and gave me a much stronger understanding of 3D Gaussian Splatting. However, while following the instructions carefully, I hit a minor roadblock when trying to execute the command: python train.py -s
Unfortunately, I came across the following error:
Traceback (most recent call last):
File "train.py", line 16, in
from gaussian_renderer import render, network_gui
File "C:\Users\wamr1\Documents\Proyectos\InstNerf\gaussian-splatting\gaussian_renderer\__init__.py", line 14, in
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
ModuleNotFoundError: No module named 'diff_gaussian_rasterization'
It seems there's an issue with the 'diff_gaussian_rasterization' module. Do you have any suggestions on how to resolve this? I greatly appreciate any further guidance!
@@Dima8D To address the error mentioned: Open your terminal and navigate to the location of the 'diff_gaussian_rasterization' module. Use the cd command to change to the correct directory. Once there, run python setup.py install and press Enter. This will install the necessary module. Repeat these steps for the 'simple-knn' folder.
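The replies above walk through reinstalling the two submodules by hand. Before reinstalling yet again, a small stdlib check can tell you whether the extensions actually landed in the environment you're currently in (the module names come from the error messages in this thread):

```python
import importlib.util

def missing_modules(names):
    """Return the module names that the active Python environment cannot import."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The two CUDA extensions built from the 'submodules' folder. If either still
# shows up here after running setup.py, the install likely went into a
# different conda environment than the one you activated.
print(missing_modules(["diff_gaussian_rasterization", "simple_knn"]))
```

Run it with the gaussian_splatting environment activated; an empty list means the imports should work and train.py's ModuleNotFoundError has some other cause.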
Hi Jonathan!! That's awesome, thanks for your video. I followed the whole process carefully, but when I go to create the conda env with the line conda env create --file environment.yml I can't continue, because I get an error message. I have the full output saved in a text file (if you need it), but in brief I get something like this:
Processing e: erf\gaussian-splatting\submodules\diff-gaussian-rasterization
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Processing e: erf\gaussian-splatting\submodules\simple-knn
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: diff-gaussian-rasterization, simple-knn
Building wheel for diff-gaussian-rasterization (setup.py): started
Building wheel for diff-gaussian-rasterization (setup.py): finished with status 'error'
Running setup.py clean for diff-gaussian-rasterization
Building wheel for simple-knn (setup.py): started
Building wheel for simple-knn (setup.py): finished with status 'error'
Running setup.py clean for simple-knn
Failed to build diff-gaussian-rasterization simple-knn
Installing collected packages: simple-knn, diff-gaussian-rasterization
Running setup.py install for simple-knn: started
Running setup.py install for simple-knn: finished with status 'error'
I don't know where the problem could be coming from; I don't know if you can help me a little with this. Anyway, thanks for your time and for the awesome video.
You need Microsoft Visual C++ 14.0 or greater installed to compile the modules. Ran into the same issue. Install that, restart the computer, delete the conda environment and try again from scratch.
It feels like just yesterday that I tried out Nvidia's NeRFs and was disappointed by their limited use, but now here we are with visuals rendering out smoothly in real time :o
Thank you so much, this is amazing! I'm having trouble with the training process. The error message says "ModuleNotFoundError: No module named 'diff_gaussian_rasterization'." What should I do?
I also have an issue with incomplete installation, which has been bothering me for several days. The settings of my system are the same as the ones shown in the video. Please keep me informed if there's any progress.
Have any of you saved the logs of what happened? The install of diff-gaussian-rasterization failed… and probably simple-knn too. If I saw the compilation output, I could help.
Can't install the conda env:
Pip subprocess output:
Pip subprocess error:
ERROR: Directory 'submodules/simple-knn' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed
And it seems that those folders are empty in the repo as well, so I don't know how to fix this.
As someone who has struggled with the installation: the magic lies in CUDA 11.7. It does not work with 11.6, it does not work with 11.8, but with 11.7 everything installed perfectly.
Awesome video! We are getting this message while training the model:
Tensorboard not available: not logging progress [30/09 12:35:06]
Reading camera 990/990 [30/09 12:35:15]
Loading Training Cameras [30/09 12:35:15]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 12:35:15]
Other than that, it seems to be busy, because it's not letting me enter a new command. Do you know how to resolve this?
Does it matter that it has cudatoolkit 11.6 in the dependencies list of your environment.yml? I'm still having trouble installing the optimizer; the pip dependencies always fail, and there's nothing on the GitHub yet to help solve it.
Hello, I have run into some minor trouble. Could you please advise me on how to resolve this? When running the program I get: "colmap is not an internal or external command, nor is it a runnable program or command". Sorry to bother you.
You need to install COLMAP and add it to path. I cover how to do that in this video at the 21 minute mark: ua-cam.com/video/LhAa1B9CFeY/v-deo.htmlsi=RhOj9xDjuhbzmUYv
During the 'conda env create --file environment.yml' step, I have this error appearing:
"Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed"
Can anyone advise me on what to do?
Yes, this solved my 'diff_gaussian_rasterization' error: (1) reinstall Visual Studio 2022 and add more components via Modify; (2) in Anaconda run:
conda activate gaussian_splatting
cd /gaussian-splatting
pip install submodules\diff-gaussian-rasterization
pip install submodules\simple-knn
Thank you @@thenerfguru
For those who have problems with the right CUDA version, my advice is to downgrade Visual Studio to 2019, since NOT all versions of Visual Studio 2022 have a supported MSC_VER for CUDA 12.x or below.
Hi. First of all, thank you very much for your tutorial. But some errors showed up on my computer:
"Pip subprocess output:
Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
WARNING: There was an error checking the latest version of pip.
failed
CondaEnvException: Pip failed"
Please help me.
Without seeing the whole error, it's hard to know exactly the problem. The project was not built successfully. This has fixed most people's issue: github.com/graphdeco-inria/gaussian-splatting/issues/146
Hey man, I'm stuck on the very last step. The training ran and everything says it was a success, but when I try to run the viewer I get "could not find config file 'cfg_args'", even though it's clearly in the output folder.
That is super odd. Personally, I am not a fan of the viewer. I suggest using Nerfstudio or Unity for viewing. I made videos on both: Nerfstudio: ua-cam.com/video/A1Gbycj0bWw/v-deo.htmlsi=ZEZLnnJTkXrSn52I Unity: ua-cam.com/video/5_GaPYBHqOo/v-deo.htmlsi=dremDVE1h9L3KgD0
Thanks for the video; I managed to get it working. As someone new to radiance field rendering, may I ask: is there a way to use this data and convert it into a 3D object, like 3D scanning? What uses/applications does this data have besides viewing it in a viewer? Thanks
If your goal is 3D modeling and not novel view synthesis, I suggest another method such as Neuralangelo. Meshes were not the intended goal of this project.
Well, I ran the optimizer and hit a problem: "Encountered error while trying to install package. -> simple-knn". Further up it reads "ERROR: Failed building wheel for simple-knn". Very peculiar. Not sure where to go from here.
Without seeing the whole error, it's hard to know exactly the problem. This has fixed most people's issue around building the project: github.com/graphdeco-inria/gaussian-splatting/issues/146
@@thenerfguru I managed to snoop around for a solution. Thank you for replying! I haven't tried anything yet but I'll know where to go if things go wrong.
I always get this error while running conda env create --file environment.yml:
Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'
How would one go about cleaning up the point cloud (PLY) files manually, in order to reduce their size or remove some noise around critical objects in the scene, for example? Blender seems to open them fine, but I'm worried there might be vertex color information or other metadata that it deletes when importing or re-exporting them.
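That worry is easy to check: splat PLYs carry custom per-vertex attributes, and you can compare the property lists in the headers before and after a round trip through another tool. A minimal header reader, stdlib only (both ASCII and binary PLY files start with a plain-text header; the attribute names mentioned below are what 3DGS-style files typically contain, so treat them as an assumption for your own files):

```python
def ply_vertex_properties(path):
    """Parse a PLY header and return the property names declared for the 'vertex' element."""
    props, in_vertex = [], False
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element"):
                # Track whether the following 'property' lines belong to the vertex element.
                in_vertex = line.split()[1] == "vertex"
            elif line.startswith("property") and in_vertex:
                props.append(line.split()[-1])  # last token is the property name
            elif line == "end_header":
                break
    return props
```

If splat-specific properties (e.g. opacity, scale/rotation, or spherical-harmonic color coefficients such as f_dc_*/f_rest_*) are present in the original but missing after re-export, the file will no longer render correctly in a splat viewer even if it still opens as a plain point cloud.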
Fantastic stuff! Would this work using a colourised 3D point cloud obtained from a LiDAR scanner as well? If so, what point cloud file formats are produced in your workflow?
A point cloud is only used for initialization of training the scene. You would still need a lot of source images with parallax movement. The output is a ply file. It takes a special renderer to display them accurately.
@@thenerfguru Thanks for this information. Any good tools you're aware of to mesh massive 3D point clouds (5-25 billion colorized 3D points)? I found the non-ml based approaches rather underwhelming.
Thanks for all your work (also on your older vids!)! I am running into a problem creating the conda env. It found conflicts, lists them and then fails to create the env. I then tried doing it manually but ran into the same problems. Any suggestions on how to start fresh? Tried un/reinstalling anaconda but that didn't fix it.. Thx again!
My guess is it's the build part of the conda environment, not the dependencies. I added troubleshooting with a couple of common fixes to the bottom of the GitHub page.
@@thenerfguru Thank you for the quick reply! I messed up: I also had VS2022 installed, but I only set the Desktop development with C++ checkmark for VS2019, not for VS2022. After fixing that it got a little further, but then got stuck again. Then I also added the path to the VS2022 cl.exe (as in your troubleshooting notes) and installed the pip packages manually and successfully. Thanks again!
Can we use this as an alternative to NeRFs (or in combination with them) to convert video/images into "nicer" 3D models, addressing the downsides of NeRFs (garbled geometry, muddy "textures")? Certain angles certainly look better than NeRF output (even stunningly so), but I don't quite understand whether these Gaussian points can enable geometry estimation or provide better point colors. A "dumb" approach that comes to mind is to render video/lots of pictures straight from the 3D Gaussian viewer (hoping for perfectly consistent stabilisation/angles/image quality compared to the original video) and feed them into Nerfstudio, but I'm sure that's not optimal 😅
Hi Jonathan, I'm having trouble at Installing the Optimizer. When entering conda env create --file environment.yml in cmd, I get the following error: "'conda' is not recognized as an internal or external command, operable program or batch file." Any ideas?
Are there limits to the size of the dataset? I have experienced NeRFs falling apart after being fed too many images. Have you heard of or experienced any limits with GS yourself?
@thenerfguru hi, I have an idea that would utilise differentiable gaussian splatting, and I have a couple of questions you might be able to answer - would save me tons of time trying to figure it out myself. 1) Is orthographic camera available, in addition to perspective camera? 2) Currently the splats disappear when they are close to the camera (which makes sense for your use case). Would it be possible to disable this? If both are true, it would be possible to "slice" through a set of splats, which might have interesting use cases. Vertex representation provided by pytorch3d is unfortunately unsuitable for my purposes...
Dear author, hello. Regarding Gaussian splatting, I'd like to ask: I actually want to use it to export 3D models. I can see the generated result is very realistic, and I'd like to export the characters or objects in the scene separately to 3D formats such as OBJ for my own use. Can I do this?
Jonathan, thanks for doing this. As the data is effectively pointcloud based, can a wireframe mesh be made from the source data along the lines of poisson or similar methodology? Also, what lighting data do we get from the result-- can this environment be used to calculate a 3D radiance (if not raytrace) model? Very exciting stuff-- apologies if you've gone into detail on this elsewhere.
Great questions. You could always run the imagery in a photogrammetry suite to get a mesh, also, something like SDFStudio would be a great option. All of the lighting is baked in. No one has really addressed the issue yet.
@@thenerfguru hrmmm.. you could, but if the GASP has already created a point cloud, it lives within a coordinate system so getting a 1:1 parity would be ideal.
I haven't been able to progress past the 18th minute, and I've been encountering errors for two days. I've started from scratch three times ;( Pip subprocess error: ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. failed CondaEnvException: Pip failed
Awesome video! Any suggestions where you can get high quality drone footage like this (following a similar orbit path)? I'm performing research around various NeRF / gsplat models
Hi, after starting training I always get these lines:
Loading Training Cameras [30/09 14:41:23]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 14:41:23]
Loading Test Cameras [30/09 14:43:54]
Number of points at initialisation: 172044 [30/09 14:43:54]
About the INFO line on quite large input images: should I resize them myself, or just leave it as is? Thanks.
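Leaving it as is just means the trainer caps very wide inputs itself. A rough sketch of the behavior the log message describes (the exact rule lives in the repo's data loader; this proportional cap is only an approximation for intuition):

```python
def capped_size(width, height, cap=1600):
    """Proportionally rescale so the width does not exceed the cap,
    mirroring the '[ INFO ] ... rescaling to 1.6K' message."""
    if width <= cap:
        return width, height
    return cap, round(height * cap / width)

print(capped_size(3840, 2160))  # a 4K frame would come out as (1600, 900)
```

Passing '--resolution 1' skips the cap and trains at native resolution, at the cost of more VRAM and slower iterations.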
Hi, I had an error when I ran python train.py -s:
Traceback (most recent call last):
File "C:\Users\IvoJr\gaussian-splatting\train.py", line 13, in
import torch
ModuleNotFoundError: No module named 'torch'
I don't know how to solve it; can you help me?
Hey Jonathan! First off, thanks again for posting this. This is incredible, much appreciated. I'm at the install for the gaussian_splatting environment in Anaconda. (Also, when I launch a regular cmd I don't see the (base) prefix; I have to launch an Anaconda Prompt. I'm assuming that's fine, as it's just the assigned terminal for the Anaconda environment?) When I ran the second command, it did its thing and then gave me a "failed". I did a restart like you suggested, but I forgot to copy and paste the error from the terminal. Now when I try to run the second command again, I get an error that the prefix already exists, so it's clearly trying to write over the gaussian_splatting environment that was already built. But how do I know if it's missing part of the installation? Should I just continue with the third step? Thanks so much, man.
No problem if you don't see base. That's just me using Anaconda Prompt which launches in the base conda environment. It sounds like your environment was built, but the submodules failed to install. You can install them manually. Most likely this was your issue (solution in the thread): github.com/graphdeco-inria/gaussian-splatting/issues/146
@@thenerfguru It looks like that may have done the trick, Jon! Thanks so much. One last thing: is there a link to the crane video, just to try to duplicate the results on a first go? Since I'm new to these, it would be great to have an asset that is done correctly to compare against for models that don't work so well. Thanks so much again, really amazing work.
Hi! Great project! How about adding support for EXR?))) It would be a fantastic opportunity to create HDR projects, such as those for cinema. I'm currently filming a low-budget historical movie, and I really appreciate your software!
It seems like the simple-knn repo is down, and it's used by most of these Gaussian splatting repos, so it's very hard to follow any install instructions. Also, it seems like torch is no longer available in the default conda channels.
Wow!! this is really amazing. I'm new to Radiance Field methods and this is mind blowing. It's the future of all audiovisual industry. Also your tutorial is so incredibly detailed and well explained. Unfortunately I reached a point from which I couldn't continue because I work on Mac. Do you know if there's any resource where we could learn how to use gaussian splatting on an M1max macbook? Thank you very much in advance
Is there a way to export a LAS/LAZ file or some other format for the pointcloud that this generates? At a glance it looked like the export option was for exporting a video; I assume that's a video recording of the user moving around in the environment?
Amazing tutorial. I've been looking forward to this. Unfortunately, the 24 GB VRAM requirement will keep me out of the game for a couple of years, or until the memory requirements are reduced. Anyway, thanks a lot!
@@thenerfguru It's currently an RTX 2060 with a laughable 6 GB of VRAM. I'm buying a 4070 with 12 GB soon, but that will still be only half the suggested minimum. Just for the experience, I started training a 120-image dataset (downscaled to Full HD resolution) and, to my surprise, got no immediate out-of-memory errors. It started training relatively fast, but the speed soon dropped to about 1 it/s. It's still training, but just too slow to be feasible.
So, I've decided to let it run for as long as it needs. It's at about 5k iterations now, going at about 3 it/s. I'll let you know if it manages to go all the way.
Final update: after many hours, training speed dropped to 0.1 it/s (10 s/it), which made me pull the plug. I got as far as 5600 iterations. I'll try again when I get the 12 GB 4070.
@@DGFig Hmmm. I am sorry to hear that! Did you change any of the processing parameters, or just see how far it would go? Someone is working on a super fast training version of this that needs even more VRAM; however, you can modify the training variables to use less.
Very cool tutorial; I appreciate you making this. I'm also curious whether this can be applied to video/image sequences that contain human actions, like dancing or performance, instead of static poses or static objects.
Hi! Thanks! If I have two iPhones filming static videos from different angles, is it possible to combine the footage to create one scene with better quality? What about movement? Is it possible to animate a Gaussian splatting scene?
When I try to compile it, I get errors saying the code has issues. I have gcc and VS Code. It's failing to build the wheel for diff-gaussian-rasterization.
Awesome video. I tried installing nerf studio on Linux, but could not get it to run. Are you aware of any "ready to go" docker container for nerf studio?
Hello, and thank you for the instructions! I keep getting an error message while attempting to create the environment:
"Failed to build diff-gaussian-rasterization simple-knn"
and also:
"Pip subprocess error: error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]"
plus a few more underneath those. Any advice? Thanks
Hey, I am getting a COLMAP error: "[option_manager.cc:815] Check failed: ExistsDir(*image_path) ERROR: Invalid options provided. ERROR:root:Feature extraction failed with code 1. Exiting." I tried following your instructions and also using the full directory path, but neither worked. Would appreciate any guidance.
Hey, thank you for an amazing tutorial! I ran into this issue while opening the viewer:
[SIBR] ## ERROR ##: FILE C:\projects\gauss2\SIBR_viewers\src\projects\gaussianviewer\apps\gaussianViewer\main.cpp LINE 140, FUNC main
Could not find config file 'cfg_args' at C:\Users\
Do you know how to fix it?
Is there a downside to letting it autoscale my images? My drone imagery is 4K, but it defaults to 1600 width instead of 2160 (I know I can set -r 1). Basically, should I force native resolution, or is that a waste?
Any chance to use these splats in UE? It would be a killer feature for virtual production in an LED cave: no more photogrammetry for backgrounds. This method is faster and simpler.
@@thenerfguru That sounds awesome! I am not partial to any game engine, Blender would work too! I am just trying to integrate these somehow! It feels like NeRFs in the sense that I know this will be something in the future, I just don't know what!
@@thenerfguru Hi, at that last phase where you're viewing the splat in the viewer, is it not possible to 'save as' and write the .splat file to disk? There are some (experimental/nascent) viewers (notably one in JS/THREE.js) that can ingest splat files.
Hi, one of the authors here, this is awesome, thanks so much for doing this! Is it ok if we link this on the Github page?
Absolutely!
Thanks for creating the project! I love where the technology is headed.
Hello author. Please make it export to Unreal Engine or Blender, with modelling data from cheap lidar or generated with 2D-to-3D AI.
Lots of indie filmmakers want to do VFX, and this skips a lot of things that only studios with millions of dollars could afford to do.
If a decent PC can render what's on a virtual camera frame after importing it into Unreal Engine, it would revamp the entire market for small players... Thank you in advance ❤.
Hello author,
One beginner question.
Why is it CUDA-based only? Will it run on a Ryzen 5 5800G and above, using the APU's share of system memory as VRAM?
This guy showed how to run AI models on a CPU! He showed CUDA-based AI running on a CPU with results almost equal to an RTX 4080! Is that possible?
ua-cam.com/video/H9oaNZNJdrw/v-deo.htmlsi=A0FjUtKPOUaERFJ2
@@crossybreed Typically the answer is that software development for CUDA has been mature for a longer time. That does not mean that CUDA is 'stable', especially since its support typically requires recent hardware; and where CUDA supported older hardware, more recent development does not support those CUDA versions. In that respect AMD has done crazy things themselves, for example with (or rather, no longer) supporting OpenCL. Only recently has HIP started to appear, which (obviously) requires porting existing CUDA and OpenCL code. It is only very recently that you see academic work on GitHub; before, the code you saw in papers was never released. I don't think it is in the researchers' best interest to make the most portable code, but in essence just to prove that an idea works.
Man, I remember we learned the theory in the 90s... movement detection via motion blur, calculating/recreating vectors and depth in a black-and-white photo... but computer power was so limited we were not really able to proceed... working on paper, lol. Later volumetric clouds came in, and now we have this... This is truly awesome... something I had dreamed of for decades. I recreated my own 3D scenes for months; now it's only a video and photos that you put into an AI. I am totally blown away.
In theory, this definitely wasn’t new! Now we have the compute hardware, we live in an amazing time.
Man, I don't even know where to start... I've been struggling with the submodules diff-gaussian-rasterization and simple-knn and have been combing through the GitHub comments about the same issue for hours... nothing seems to work. Do I just need to uninstall all components and start fresh? There seems to be a lot of confusion about which CUDA version to use and how to install torch.
It worked first try, this is incredible. I have a 3D point cloud model of my living room forever. Great tutorial, super detailed.
So awesome to hear that! People usually struggle to capture rooms.
Hey
I've trained the model using Gaussian splatting. I can see the result in Nerfstudio and render video. But how do I get a .ply or .obj mesh file so I can see my result with colours in any 3D viewer?
Please, help me if you have found a solution for this.
I saw the term Gaussian Splat and looked it up. I came to this video and watched a few minutes of it. I understand that this chap is speaking English - I recognise some of the words - but honestly, on the whole it sounds like a sequence of random words: "git, splat, hub, fork, repo". Great stuff!
I cannot thank you enough for this. I went from knowing basically nothing to a completed scene in 3 hours.
It would have been much faster but I had the wrong CUDA version.
You absolutely nailed this tutorial!
Thank you!
Which version did you have, and which did you need?
After 3 days of trying, I got it!! I just want to say thank you for your tutorial!! I had some troubles, and after solving them the results are really amazing!!
I've been seeing this everywhere, but nobody wanted to show how to do it! Thank you so much for this. I'm on my way to making some really awesome art. Keep it up!!
Godspeed! If you post anything on social, tag me and I’ll reshare it. I’m fairly active on LI and X
Hey, this is very cool! Thank you a lot. I finally managed to generate my first Gaussian Splatting model with your instructions. Lots of steps in here, but it finally works! Nice that you made a Windows version on GitHub from the original. Thank you! This is so interesting!
Thanks for the comment! Have fun with it!
This video is great! Thank you for being so generous with your knowledge about this cutting edge technology! ...and the command prompt tips were GOLD! :)
Thank you! I usually find the biggest hurdle for people to get into this tech is understanding command prompt. I find it funny that I struggle with Unreal Engine blueprints which is arguably easier because it's visual.
Thank you very much for making this video! You're a great teacher, I had no problems following this and I'm very happy with how my first splat turned out.
I was getting errors with "diff-gaussian-rasterization" and "simple-knn" when running the command "conda env create --file environment.yml". I just wanted to let others with the same problem know what I did to solve it.
I had to add "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64" to PATH in order for the build command to find cl.exe.
And since the "conda env create" command was aborted before it was finished I had to start a new CMD window after adding to PATH and rerun the commands, ending with "conda activate gaussian_splatting" and from there I typed "conda env update --file environment.yml" to start the build process again. Hope it helps someone. 😊
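As a quick sanity check before rerunning the build (a generic sketch, not part of the repo; the MSVC path above is one example and varies per install): the wheel builds need `cl.exe` reachable on PATH from a freshly opened terminal, which this just verifies.

```python
import shutil

def msvc_on_path():
    # Returns the full path to cl.exe if the MSVC compiler is reachable
    # from this shell, or None if PATH still needs the Hostx64\x64 folder.
    return shutil.which("cl")

location = msvc_on_path()
if location is None:
    print("cl.exe not found; add the MSVC bin folder to PATH "
          "and open a new terminal before rebuilding")
else:
    print("cl.exe found at", location)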
I had the same issue and this tip didn't work, I got the exact same errors again. I have tried everything I could find to address the issues I am getting, but it seems like I'm at a dead end here. Hope someone can make a more accessible version soon.
@@lilmanpurse8603 Make sure to run these commands with Administrator rights after adding the path:
conda activate gaussian_splatting
pip install -e submodules/diff-gaussian-rasterization
pip install -e submodules/simple-knn
@@lilmanpurse8603 Scroll down to the bottom of the GitHub page and check the common issues section. A more complete step-by-step instruction was there last time I checked. It might be a small step missed, like not closing your command window in between changes to PATH, for instance. Hope you find a solution.
@@lilmanpurse8603 I tried two ways to solve that: one was adding the VS path as described above, and I also changed the version of my cudatoolkit from 12.3 to 11.8. I don't know which of these worked, but it's solved now.
One thing worth reminding: after you make changes to the environment, restarting the computer may be very effective. 😂😂
thank you so much!
the coolest advancement in neural rendering! amazing stuff man
I agree! It is so amazing because it renders in real-time and looks crazy detailed.
@@thenerfguru the first game engine to utilize this technology will make billions. Gaben take notes
This is going to change the world.
\gaussian_renderer\__init__.py", line 14, in
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
ModuleNotFoundError: No module named 'diff_gaussian_rasterization'
It's showing me this error! I tried to solve it but it isn't working... Please help!
Bravo. I will give this a go with images captured with my Leica BLK3D and compare the result with the coloured textured mesh created from the same images in RealityCapture.
Share the results! Completely different outputs and visualization techniques.
I would love to see it. Do you have LinkedIn or something?
Correct me if I'm wrong, but it should be possible to convert any 3D Blender scene into Gaussians, because we already know the position of the virtual camera for each rendered image. So it should be possible to create photorealistic, interactive scenes pretty quickly from 3D scenes created in Blender, using a few images rendered with Cycles. This could be a revolution in gaming as well.
Indeed you could render some pretty computationally expensive scenes and convert it to this format.
@@mattizzle81 I converted a 3D Blender scene into Gaussians, and it looks amazing. You can do movie production on a low-end PC if you render the scene in Eevee.
Not really convinced that a triangle based, photorealistic blender scene renders faster or better if you convert it to gaussians. It's just a different, more complex way to represent the same dataset. I would argue you actually lose performance and quality since you approximate the original geometry and shaders with particles and 4th order spherical harmonics.
Use Cycles for offline rendering, GS for realtime rendering with the lights and transparency baked in. @@robertmayster7863
You nailed it! Thank you. I had been led astray by other tuts and this one was very straightforward. I should drive all the confused viewers from other tuts this way.
I managed to get pretty cool results on a RTX2070S with just 5000 iterations. The training step was pretty fast too, under 10 minutes! Incredible technology. I can only imagine how much more incredible it's going to get in a couple of years. Thanks for the video, everything was explained very clearly.
Glad it worked well for you! I bet you could run it all to at least 7000 iters as well.
I get this while "Installing pip dependencies"
Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed
I've a 3090 Ti, I feel blessed that I'll be able to mess with this tech. Thanks a lot for the tutorial.
You’re welcome!
Amazing job guys! As a surveyor i know photogrammetry and this is another level ♥
🙌
Nice tutorial. I would like to make a suggestion: when showing webpages or applications, it would be great if you could zoom in so the text is bigger, so people with problematic sight can see what is on screen :)
Thanks
Great suggestion!
We used to use this 13 years ago for 2D-to-3D stereo conversion with a polar axis. :) Now it's realtime and hardware-based. It used to be software based on depth maps laid out manually. :) Now the hardware auto-creates the depth maps.
Hi, I have a problem during Installing the Optimizer. When I run the second command (conda env create --file environment.yml) it starts OK, but then gives an error while installing the pip dependencies, like below (any help is appreciated):
Pip subprocess output:
Processing c:\users\tarek\gaussian-splatting\submodules\diff-gaussian-rasterization
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Processing c:\users\tarek\gaussian-splatting\submodules\simple-knn
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: diff_gaussian_rasterization, simple_knn
Building wheel for diff_gaussian_rasterization (setup.py): started
Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'
Running setup.py clean for diff_gaussian_rasterization
Building wheel for simple_knn (setup.py): started
Building wheel for simple_knn (setup.py): finished with status 'error'
Running setup.py clean for simple_knn
Failed to build diff_gaussian_rasterization simple_knn
Installing collected packages: simple_knn, diff_gaussian_rasterization
Running setup.py install for simple_knn: started
Running setup.py install for simple_knn: finished with status 'error'
Pip subprocess error:
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
During the process of using gaussian-splatting-Windows: when the image files originally in the 'input' folder are processed by python convert.py -s and output to the 'images' directory, they drop in number from dozens to just a few, and these images now display a wide-angle effect. What could be the issue, and how should I go about troubleshooting it?
Been looking forward to this, thank you!
You're welcome!
@@thenerfguru I think this won't work on my 12-year-old mid-range laptop.
I'm currently on it:
4 GB RAM
the processor is an Intel Pentium B960
the GPU is an AMD Radeon 7400M series, or AMD Radeon 7400G dual
it's Windows 7
I've also optimized it as much as I can.
I've been using it for 9 years and it runs everything perfectly.
IDK, I just want to noclip through mp4 videos.
Thank you for sharing this valuable tutorial! It was extremely informative and gave me a much stronger understanding of 3D Gaussian Splatting. However, while following the instructions carefully, I encountered a minor roadblock when trying to execute the command: python train.py -s
Unfortunately, I came across the following error:
Traceback (most recent call last):
File "train.py", line 16, in
from gaussian_renderer import render, network_gui
File "C:\Users\wamr1\Documents\Proyectos\InstNerf\gaussian-splatting\gaussian_renderer\__init__.py", line 14, in
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
ModuleNotFoundError: No module named 'diff_gaussian_rasterization'
It seems there's an issue with the 'diff_gaussian_rasterization' module. Do you have any suggestions on how to resolve this issue? I greatly appreciate any further guidance you can provide!
Same thing, have you found the solution?
same problem here...
@@Dima8D To address the error mentioned:
Open your terminal and navigate to the location of the 'diff_gaussian_rasterization' module. Use the cd command to change to the correct directory.
Once there, run python setup.py install and press Enter. This will install the necessary module.
Repeat these steps for the 'simple-knn' folder.
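After running those installs, a quick generic way (not specific to this repo) to confirm the two modules actually landed in the active environment is:

```python
import importlib.util

def installed(name):
    # find_spec returns None when the interpreter cannot locate the
    # module, i.e. the pip install did not reach this environment.
    return importlib.util.find_spec(name) is not None

for mod in ("diff_gaussian_rasterization", "simple_knn"):
    print(mod, "->", "ok" if installed(mod) else "MISSING")
```

If either still prints MISSING, the install most likely ran outside the activated gaussian_splatting environment.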
Imagine capturing images for google street view using this technique 😃 I won't leave home ever again
Hi Jonathan!! That's awesome, thanks for your video. I carefully followed the whole process, but when I go to create the conda env with the line:
conda env create --file environment.yml
I can't go on, because I get an error message. I have the full output of the process from cmd in a text file (if you need it), but in brief, I get something like this:
Processing e:\nerf\gaussian-splatting\submodules\diff-gaussian-rasterization
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Processing e:\nerf\gaussian-splatting\submodules\simple-knn
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: diff-gaussian-rasterization, simple-knn
Building wheel for diff-gaussian-rasterization (setup.py): started
Building wheel for diff-gaussian-rasterization (setup.py): finished with status 'error'
Running setup.py clean for diff-gaussian-rasterization
Building wheel for simple-knn (setup.py): started
Building wheel for simple-knn (setup.py): finished with status 'error'
Running setup.py clean for simple-knn
Failed to build diff-gaussian-rasterization simple-knn
Installing collected packages: simple-knn, diff-gaussian-rasterization
Running setup.py install for simple-knn: started
Running setup.py install for simple-knn: finished with status 'error'
I don't know where the problem could be coming from, and I don't know if you can help me a little with this.
Anyway, thanks for your time and for the awesome video.
same problem here unfortunately
You need Microsoft Visual C++ 14.0 or greater installed to compile the modules. Ran into the same issue.
Install that, restart the computer, delete the conda environment and try again from scratch.
It feels like just yesterday that I tried out Nvidia NeRFs and was disappointed by their limited use, but now here we are with visuals rendering out smoothly in real time :o
Wow, that's a lot of VRAM XD
@@Thats_Cool_Jack The requirement is dropping fast.
Really COOOOL video! Thanks for the guide, and I successfully made my own 3D model!
Very detailed instructional video, love from new beginner. :)
Thank you so much, this is amazing! I'm having trouble with the training process. The error message says "ModuleNotFoundError: No module named 'diff_gaussian_rasterization'." What should I do?
And I actually see it here: gaussian-splatting\submodules\diff-gaussian-rasterization 😶🌫
Did you resolve it? This may be a good question to ask in the main GitHub repo.
I also have an issue with an incomplete installation, which has been bothering me for several days. My system settings are the same as the ones shown in the video. Please keep me informed if there's any progress.
I'm stuck at the same error.
Have any of you saved the logs of what happened? The install of diff-gaussian-rasterization failed… and probably simple-knn too. If I saw the compilation log, I could help.
Can't install the conda env:
Pip subprocess output:
Pip subprocess error:
ERROR: Directory 'submodules/simple-knn' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed
And it seems that those folders are empty in the repo as well, so I don't know how to fix this.
OK, this error was because I hadn't used the clone command from the repo readme, but used the zip download with the https address instead, and that doesn't work.
@@Zvezdan88 Yeah, always use git! Then you can easily pull new updates and upgrade.
Very Helpful Guide. Thank you so much for making this video and saving a lot of time for others trying to understand and install the program :D
Glad it helped!
As someone who has struggled with the installation: the magic lies in CUDA 11.7. It does not work with 11.6, it does not work with 11.8, but with 11.7 it's perfect. Everything installed fine.
I was able to do everything with 11.8, with the exception of the conversion, which required that I turn off the GPU. Your fix seems cleaner though.
@@patrickjdarrow Why did you have to turn off the GPU, and how did you find that out?
Awesome video! We are getting this message while training the model: Tensorboard not available: not logging progress [30/09 12:35:06]
Reading camera 990/990 [30/09 12:35:15]
Loading Training Cameras [30/09 12:35:15]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 12:35:15] .
Other than that, it seems like it is busy, because it's not allowing me to enter a new command.
Do you know how to resolve this?
Does it matter that it has cudatoolkit 11.6 in the dependencies list of your environment.yml? I'm still having trouble installing the Optimizer; the pip dependencies always fail, and there's nothing on the GitHub issues yet to help solve it.
What does the failure say? Have you tried asking for help on the main GitHub project?
I also did see that. If you have 11.8, you should already be satisfying the requirement and it will skip it.
Hello, I have encountered some minor trouble. Could you please advise me on how to resolve this issue: while running the program, it displays "colmap is not an internal or external command, nor is it a runnable program or command". Excuse me for bothering you.
You need to install COLMAP and add it to PATH. I cover how to do that in this video at the 21-minute mark: ua-cam.com/video/LhAa1B9CFeY/v-deo.htmlsi=RhOj9xDjuhbzmUYv
Thank you very much.
During the 'conda env create --file environment.yml' step, I have this error appearing:
"Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed'
Can anyone advise me on what to do?
Does this solve your issue? github.com/graphdeco-inria/gaussian-splatting/issues/146
Yes, this solved my error related to "diff_gaussian_rasterization":
(1) reinstall Visual Studio 2022, installing the missing components via Modify
(2) run Anaconda: conda activate gaussian_splatting
cd /gaussian-splatting
pip install submodules\diff-gaussian-rasterization
pip install submodules\simple-knn
thank you
@@thenerfguru
Much thanks. Will be trying this out tomorrow.
Great! Let me know how it goes!
For those who have problems with the right CUDA version, my advice is to downgrade your Visual Studio to 2019, since NOT all versions of Visual Studio 2022 have a supported _MSC_VER for CUDA 12.x or below.
Wow amazing, this is great work! I am looking forward to applying this technology to Google Maps' street view.
I'm sure this technology or similar will be integrated into their new Immersive View.
Heyya! Thanks for providing such a great tutorial video. I wonder, is there a video for macOS?
You need to set up the environment inside the Anaconda Prompt; cmd didn't work for me.
I ran into this on one machine, but not the other. I don't know why 🤷♂️
Hi, great tutorial, thanks a lot!!!
Is there any explanation of how to set this up with CUDA 12? I didn't find it in the repo.
Hi. First of all, I'm very thankful for your tutorial.
But some failures showed up on my computer:
"Pip subprocess output:
Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
WARNING: There was an error checking the latest version of pip.
failed
CondaEnvException: Pip failed"
plz help me.
Same problem
Without seeing the whole error, it's hard to know exactly the problem. The project was not built successfully. This has fixed most people's issue: github.com/graphdeco-inria/gaussian-splatting/issues/146
So good man, running perfectly. Thanks for the video.
You're welcome!
Hey man, I'm stuck on the very last step. I did the training and everything says it was a success, but when I try to run the viewer I get "could not find config file 'cfg_args'". But it's clearly in the output folder.
That is super odd. Personally, I am not a fan of the viewer. I suggest using Nerfstudio or Unity for viewing. I made videos on both:
Nerfstudio: ua-cam.com/video/A1Gbycj0bWw/v-deo.htmlsi=ZEZLnnJTkXrSn52I
Unity: ua-cam.com/video/5_GaPYBHqOo/v-deo.htmlsi=dremDVE1h9L3KgD0
Getting the same error here..
Thanks for the video, and I managed to get it working.
As someone who is new to radiance field rendering, may I ask: is there a way to use this data and convert it into a 3D object, like 3D scanning?
What uses/applications can this data serve besides viewing it with a viewer? Thanks
If your goal is 3D modeling and not novel view synthesis, I suggest another method such as Neuralangelo. Meshes were not the intended goal of this project.
Thank you, Jonathan! It works and it causes a storm of emotions in me. Just crazy!)
Well, I ran the optimizer, and I got a problem.
Encountered error while trying to install package.
-> simple-knn
Further up it reads, ERROR: failed building wheel for simple-knn.
Very peculiar. Not sure where to go from here.
Without seeing the whole error, it's hard to know exactly the problem. This has fixed most people's issue around building the project: github.com/graphdeco-inria/gaussian-splatting/issues/146
@@thenerfguru I managed to snoop around for a solution. Thank you for replying!
I haven't tried anything yet but I'll know where to go if things go wrong.
I don't have the machine for this, but I immediately want to see what one of these splatted scenes look like in a VR headset. 😁
Keep an eye out for VR enabled 3D Gaussian Splats. Plus, it takes a lot less GPU power to view these. Training is the VRAM bottleneck
I just ran it with a 3070 GPU with 8GB
@@thenerfguru I can’t wait! What res do you expect to be able to pull off, say on a Quest 2 using Steam vs HMD-only?
@@olgaforce6678 Did you manage to train a model with a 3070?
@@olgaforce6678how did it go?
Thank you very much for the video, it helped me a lot. Thank you for your generous sharing!
I always get "Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'" while running conda env create --file environment.yml.
When I enter this command, "ffmpeg -i {path to video} -qscale:v 1 -qmin 1 -vf fps={frame extraction rate} %04d.jpg", it tells me "permission denied".
I have the same problem, did you find any solution? Thanks
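In my experience, "permission denied" from that command usually means the %04d.jpg pattern is being written into a protected or nonexistent folder (or the video path itself is unreadable). A hypothetical helper that fills in the tutorial's placeholders and creates the output folder first (the fps value is just an example):

```python
import os

def ffmpeg_extract_cmd(video_path, out_dir, fps=2):
    # Create the destination first; ffmpeg will not make it for you,
    # and writing %04d.jpg into a missing or protected folder fails.
    os.makedirs(out_dir, exist_ok=True)
    return (f'ffmpeg -i "{video_path}" -qscale:v 1 -qmin 1 '
            f'-vf fps={fps} "{out_dir}/%04d.jpg"')

print(ffmpeg_extract_cmd("walkthrough.mp4", "input"))
```

Running the printed command from a folder you own (not Program Files or the drive root) should avoid the permission error.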
I was here before it went mainstream. It's awesome.
How would one go about cleaning up the point cloud (PLY) files manually, in order to reduce their size or remove some noise around critical objects in the scene, for example? Blender seems to open them fine, but I'm worried there might be some vertex color information or other metadata that it deletes when importing or re-exporting them.
CloudCompare
@@asherguild Have you tried it? Does it retain every bit of info for each vertex from the original file when you export/save it again?
CloudCompare wouldn’t be an easy solution. You would have to save all of the data associated with each point. That’s not something it does by default.
@@thenerfguru True for post processing, but I was thinking of preprocessing
What 360 camera did you use for this splat, and which one do you recommend? I have the Insta360 RS One Inch and the Ricoh Theta X.
Those would work great. I have the RS One Inch
Nice display, thank you.
Fantastic stuff! Would this work using a colourised 3D point cloud obtained from a LiDAR scanner as well? If so, what point cloud file formats are produced in your workflow?
A point cloud is only used for initialization of training the scene. You would still need a lot of source images with parallax movement.
The output is a ply file. It takes a special renderer to display them accurately.
@@thenerfguru Thanks for this information. Are there any good tools you're aware of for meshing massive 3D point clouds (5-25 billion colorized 3D points)? I found the non-ML-based approaches rather underwhelming.
Thanks for all your work (also on your older vids)! I am running into a problem creating the conda env. It finds conflicts, lists them, and then fails to create the env. I then tried doing it manually but ran into the same problems. Any suggestions on how to start fresh? I tried un/reinstalling Anaconda but that didn't fix it. Thanks again!
My guess is it's the build part of the conda environment, not the dependencies. I added troubleshooting to the bottom of the GitHub page with a couple of common fixes.
@@thenerfguru Thank you for the quick reply! I messed up: I also had VS2022 installed but didn't tick the Desktop development with C++ option for it, only for VS2019. After that it got a little further, but then got stuck again. Then I also added the path to the VS2022 cl.exe (like in your troubleshooting notes) and installed the pip packages manually and successfully. Thanks again!
Can we use this as an alternative to NeRFs (or in combination with them) to convert video/images into "nicer" 3D models, addressing the downsides of NeRFs (garbled geometry, muddy "textures")?
Certain angles certainly look better compared to NeRF output (even stunningly so), but I don't quite understand whether these "gaussian points" can enable geometry estimation or provide better point colors.
A "dumb" way that comes to mind is to produce video/lots of pictures straight from the 3D gaussian viewer (hoping to get perfectly consistent stabilisation/angles/image quality compared to the original video) and feed them into Nerfstudio, but I am sure this is not an optimal approach.
😅
Hi Jonathan, I'm having trouble with Installing the Optimizer. When entering conda env create --file environment.yml in cmd, I get the following error: 'conda' is not recognized as an internal or external command, operable program or batch file. Any ideas?
Either conda is not installed, or it's not added to PATH. Did you install Anaconda or Miniconda already?
Hi Jonathan, thanks for the reply. It worked after installing miniconda.
Are there limits to the size of the dataset? I have experienced NeRFs falling apart after being fed too many images. Have you heard of or experienced any limits with GS yourself?
You are going to run into limits on VRAM when training the data. I think more photos are fine, it just depends on what those photos are.
@thenerfguru Hi, I have an idea that would utilise differentiable gaussian splatting, and I have a couple of questions you might be able to answer; it would save me tons of time trying to figure it out myself.
1) Is orthographic camera available, in addition to perspective camera?
2) Currently the splats disappear when they are close to the camera (which makes sense for your use case). Would it be possible to disable this?
If both are true, it would be possible to "slice" through a set of splats, which might have interesting use cases. The vertex representation provided by PyTorch3D is unfortunately unsuitable for my purposes...
Do you need something like an iPhone Pro to take the images, because of the lidar? Or can it be any footage, recorded on any device?
Any photo. No lidar is involved.
Great tutorial. Thanks!
Thank you!
Dear author, hello. Regarding your Gaussian splatting, I would like to ask: I actually want to use it to export 3D models. I see that the generated effect is very realistic, and I want to export the characters or objects in the scene separately to 3D formats such as OBJ for my own use. Can I do this?
Jonathan, thanks for doing this. As the data is effectively point-cloud based, can a wireframe mesh be made from the source data, along the lines of Poisson or a similar methodology?
Also, what lighting data do we get from the result? Can this environment be used to calculate a 3D radiance (if not raytraced) model?
Very exciting stuff-- apologies if you've gone into detail on this elsewhere.
Great questions. You could always run the imagery through a photogrammetry suite to get a mesh; also, something like SDFStudio would be a great option.
All of the lighting is baked in. No one has really addressed the issue yet.
@@thenerfguru Hrmmm... you could, but since the GASP has already created a point cloud, it lives within a coordinate system, so getting 1:1 parity would be ideal.
I haven't been able to progress past the 18th minute, and I've been encountering errors for two days. I've started from scratch three times ;(
Pip subprocess error:
ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
failed
CondaEnvException: Pip failed
Thank you very much for making this more accessible!
You're very welcome!
Awesome video! Any suggestions on where to get high-quality drone footage like this (following a similar orbit path)? I'm doing research on various NeRF / gsplat models.
Awesome video! Have you seen the paper on dynamic 3D gaussian splatting? The code is supposed to come out soon. It looks very exciting.
I have seen it! The only downside is that it requires over a dozen cameras in an array recording simultaneously.
Hi, after starting training I always get this line: Loading Training Cameras [30/09 14:41:23]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 14:41:23]
Loading Test Cameras [30/09 14:43:54]
Number of points at initialisation : 172044 [30/09 14:43:54] Regarding the INFO line about encountering quite large input images: should I resize them or just leave them as they are? Thanks.
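For reference, the `-r` flag named in that log line is passed straight to train.py. The snippet below only builds and echoes the two common invocations rather than running them (the dataset path is hypothetical, and the real commands need the full gaussian-splatting environment):

```shell
# Compose the two typical train.py resolution invocations as strings.
scene="data/my_scene"                        # hypothetical dataset folder
native="python train.py -s $scene -r 1"      # force native resolution (heavy on VRAM)
half="python train.py -s $scene -r 2"        # half scale: a middle ground
echo "$native"
echo "$half"
```

In the author's own tests (mentioned further down this thread), the 1.6K default is usually fine, and forcing native resolution mostly just costs VRAM.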
Great video, amazing tutorial skills. I'd like to know whether I can use 360° photos, or does that require different settings?
You can’t natively use 360 video. It needs to be separated into parts. I’ll make a tutorial soon.
Good video. I got an error when I ran the `python train.py -s` line: Traceback (most recent call last):
File "C:\Users\IvoJr\gaussian-splatting\train.py", line 13, in
import torch
ModuleNotFoundError: No module named 'torch'. I don't know how to solve it; can you help me?
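A hedged first check for this one: "No module named 'torch'" usually means the command was run with a different Python interpreter than the one conda installed torch into (i.e., outside the activated environment). This prints which interpreter is active and whether it can see torch:

```shell
# Show the active Python and test whether torch is importable from it.
pybin=$(command -v python3 || command -v python)
echo "interpreter: $pybin"
if "$pybin" -c "import torch" 2>/dev/null; then
  torch_ok=1
else
  torch_ok=0
fi
echo "torch importable: $torch_ok"
```

If it prints 0, activate the environment first (`conda activate gaussian_splatting`) and re-run train.py from that same prompt.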
Hey Jonathan! First off, thanks again for posting this. This is incredible. Much appreciated. So I'm at the install step for the gaussian_splatting environment in Anaconda (also, when I launch a regular cmd I don't see the (base); I have to launch an Anaconda Prompt. I'm assuming that's fine, as it's just the assigned terminal for the Anaconda environment?). So when I ran the second prompt, it did its thing and then gave me a "failed". I did a restart like you suggested, but I forgot to copy and paste the error from the terminal. Now when I try to run the second command again, I get an error that the prefix already exists, so it's clearly trying to write over the gaussian_splatting environment that's been built already. But how do I know if it's missing part of the installation? Should I just continue with the third step? Thanks so much, man.
No problem if you don't see base. That's just me using Anaconda Prompt which launches in the base conda environment.
It sounds like your environment was built, but the submodules failed to install. You can install them manually. Most likely this was your issue (solution in the thread): github.com/graphdeco-inria/gaussian-splatting/issues/146
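The manual install that reply mentions can be sketched like this (run from the repo root, inside the activated environment; guarded so it only attempts the install when the submodule folders are actually present):

```shell
# Install the two CUDA extension submodules by hand if the environment
# build skipped them.
for sub in submodules/diff-gaussian-rasterization submodules/simple-knn; do
  if [ -d "$sub" ]; then
    pip install "$sub"
  else
    echo "skipping $sub (folder not found -- run from the repo root)"
  fi
done
```

If the folders themselves are missing, fetch them first with `git submodule update --init --recursive`.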
@@thenerfguru It looks like that may have done the trick, Jon! Thanks so much. One last thing: is there a link to the crane video, just to try to duplicate the results on a first go? Since I'm new to these, it would be great to have an asset that is done correctly to compare against for models that don't work so well. Thanks so much again. Really amazing work.
Run `conda install -c anaconda vs2019_win-64` before setting up the environment.
@@thenerfguru Most likely that's why I'm getting errors; I'll try to fix it by adding the Visual Studio path dependency.
I'm at the "SETUP" step and can't build. How can I proceed, please? I've restarted and everything. I would really like to be done with it!
Hi! Great project! How about adding support for EXR? It would be a fantastic opportunity to create HDR projects, such as those for cinema. I'm currently filming a low-budget historical movie, and I really appreciate your software!
It seems like the simple-knn repo, which most of these Gaussian splatting repos depend on, is down, so it's very hard to follow any install instructions. Also, it seems like torch is no longer available in the default conda channels.
Wow!! this is really amazing. I'm new to Radiance Field methods and this is mind blowing. It's the future of all audiovisual industry. Also your tutorial is so incredibly detailed and well explained. Unfortunately I reached a point from which I couldn't continue because I work on Mac. Do you know if there's any resource where we could learn how to use gaussian splatting on an M1max macbook? Thank you very much in advance
For you, your best bet is to use Luma AI’s new free Gaussian splats.
That sounds really promising; I'll give it a try. Thank you so much for taking the time to answer all the questions so quickly.
Is there a way to export a LAS/LAZ file or some other format for the pointcloud that this generates? At a glance it looked like the export option was for exporting a video; I assume that's a video recording of the user moving around in the environment?
Dankeschön (thank you). Thank you for doing this; it was a very good explanation.
Amazing tutorial. I've been looking forward to this. Unfortunately, the 24 GB VRAM requirement will keep me out of the game for a couple of years, or until the memory requirements are reduced. Anyway, thanks a lot!
What’s your current GPU?
It's currently an RTX 2060 with a laughable 6 GB of VRAM. I'm buying a 4070 with 12 GB soon, but that will still be only half the suggested minimum. Just for the experience, I started training a 120-image dataset (downscaled to Full HD resolution) and, to my surprise, got no immediate out-of-memory errors. It started training relatively fast, but soon the speed dropped to about 1 it/s. It's still training, but just too slow to be feasible. @@thenerfguru
So, I've decided to let it run for as long as it needs. It's at about 5k iterations now, going at about 3 it/s. I'll let you know if it manages to go all the way.
Final update: after many hours, training speed dropped to 0.1 it/s (10 s/it), which made me pull the plug. I got as far as 5600 iterations. I'll try again when I get the 12 GB 4070.
@@DGFig Hmmm. I am sorry to hear that! Did you change any of the processing parameters, or just see how far it would go? Someone is working on a super-fast training version of this that needs even more VRAM; however, you can modify the training variables to use less.
very cool tutorial, appreciate you made this.
I am also curious whether this can be used on video/image sequences that contain human actions, like dancing or performances, instead of static poses or static objects.
Not with this project. Check out this project that tackles the problem: zju3dv.github.io/4k4d/
Hi!
Thanks!
If I have two iPhones filming static videos from different angles, is it possible to combine the footage to create one scene with better quality?
What about movement? Is it possible to animate gaussian splatting scene?
When I try to compile it, I get errors saying the code has issues. I have gcc and VS Code. It is failing to build the wheel for diff-gaussian-rasterization.
Thanks for the tutorial. Is it possible to export the result at the end to a 3D format (FBX, OBJ, etc.)?
Awesome video. I tried installing Nerfstudio on Linux but could not get it to run. Are you aware of any "ready to go" Docker container for Nerfstudio?
Agreed, a Docker container image and Dockerfile would be great to have.
For Nerfstudio or 3D Gaussian Splatting? Nerfstudio has it in their documentation. This project unfortunately does not.
Hello, and thank you for the instructions! I keep getting an error message while attempting to create the environment: "Failed to build diff-gaussian-rasterization simple-knn" and also "Pip subprocess error:
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]" and a few more underneath those. Any advice? Thanks.
Sorry to hear you ran into issues. I suggest starting here: github.com/graphdeco-inria/gaussian-splatting/issues/146
It worked!!!! Thanks, NeRF guy, now on to the next step lol @@thenerfguru
Hey, I am getting a colmap error, "[option_manager.cc:815] Check failed: ExistsDir(*image_path)
ERROR: Invalid options provided.
ERROR:root:Feature extraction failed with code 1. Exiting." I tried following your instructions and also using the full directory path; neither worked. Would appreciate any guidance.
Do you happen to have a space in either the folder or file names?
@@thenerfguru I have the same error, no spaces anywhere
No @@thenerfguru
Fixed; I just renamed the files to only numbers, no letters.
@@D0m1n1c6 that makes no sense, but glad it worked!
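Two quick pre-flight checks that catch most of these COLMAP path failures (the path below is hypothetical; substitute your own image folder):

```shell
# COLMAP's ExistsDir check fails if the folder is missing or the path was
# mangled; spaces in Windows folder names are a frequent culprit.
img_path="data/my_scene/input"          # hypothetical image folder
case "$img_path" in
  *" "*) echo "warning: path contains a space -- quote it or rename the folder" ;;
esac
if [ -d "$img_path" ]; then
  echo "ok: $img_path exists"
else
  echo "warning: $img_path does not exist"
fi
```

If both checks pass and the error persists, try the full absolute path rather than a relative one.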
Is Gaussian splatting considerably different from Instant NeRF? My results did not show a remarkable difference for the same dataset.
Hey, thank you for an amazing tutorial! I ran into this issue while opening the viewer:
[SIBR] ## ERROR ##: FILE C:\projects\gauss2\SIBR_viewers\src\projects\gaussianviewer\apps\gaussianViewer\main.cpp
LINE 140, FUNC main
Could not find config file 'cfg_args' at C:\Users\
do you know how to fix it ?
Is there a downside to letting it autoscale my images? I have my drone imagery in 4K resolution, but it wants to default to 1600 width instead of 2160 (I know I can set -r 1). Basically I'm asking: should I force native resolution, or is it a waste?
In my tests, it’s usually just a waste. However, you can try -r 2 (half scale). That may be a bump up from 1600.
Any chance of using these splats in UE? It would be a killer feature for virtual production in an LED cave: no more photogrammetry for backgrounds. This method is faster and simpler.
Video tutorial coming soon. You should be able to find it though.
What is the base coordinate system/axis convention used? Right- or left-handed? I.e., Z up, Y forward?
For me the initial install failed and restarting fixed it also! Thanks for making this tutorial :)
how did you restart it if it already created the conda env? Did you delete the env first?
I got a few errors on my install but it did finish
Yup, I first had to do `conda env remove -n gaussian_splatting` and then re-run (after restarting) @@AIWarper
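For anyone else in this situation, the clean-rebuild sequence looks roughly like this (guarded so it does nothing on machines without conda; run from the gaussian-splatting repo root):

```shell
# Remove the half-built environment, then recreate it from the repo's
# environment.yml.
if command -v conda >/dev/null 2>&1; then
  conda env remove -n gaussian_splatting
  conda env create --file environment.yml
  rebuilt=1
else
  echo "conda not on PATH -- open an Anaconda Prompt first"
  rebuilt=0
fi
```

Removing the environment first avoids the "prefix already exists" error mentioned earlier in the thread.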
Got it working after that, thanks! @@timiles8659
You’re welcome!
Is there a way to export this to Blender or other 3D software and maintain the same quality?
Not yet. Right now your best bet for a similar pipeline is Nerfstudio to Volinga to UE5.
@@thenerfguru I would be incredibly interested in a tutorial for that!
@@hyperFixationStudios how about getting it into Unity?
@@thenerfguru That sounds awesome! I am not partial to any game engine; Blender would work too! I am just trying to integrate these somehow! It feels like NeRFs in the sense that I know this will be something in the future; I just don't know what!
@@thenerfguru Hi , at that last phase where you're viewing the splat view in the viewer, is it not possible to 'save as' and save the .splat file to disk? There are some (experimental/nascent) viewers (notably one in JS/THREEJS) that can ingest splat files..