Getting Started With 3D Gaussian Splatting for Windows (Beginner Tutorial)

  • Published 31 Jul 2024
  • In this video, I walk you through how to install 3D Gaussian Splatting for Real-Time Radiance Field Rendering. I also show you how to make your own scenes with 3D Gaussian Splats. You do not need any prior programming or command prompt experience. See below for a link to the modified repository I reference in the video, as well as other helpful links you will use along the way.
    Link to GitHub Repo: github.com/jonstephens85/gaus...
    FFmpeg command to extract images from video:
    ffmpeg -i {path to video} -qscale:v 1 -qmin 1 -vf fps={frame extraction rate} %04d.jpg
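    For example, with hypothetical paths and a 2 fps extraction rate, the filled-in command would look like:
    ffmpeg -i C:\captures\scene.mp4 -qscale:v 1 -qmin 1 -vf fps=2 %04d.jpg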
    If you are unsure how to capture images or video for your own scene, I recommend this guide that I made for @EveryPoint: • How to Capture Images ...
    Original 3D Gaussian Splats project page: repo-sam.inria.fr/fungraph/3d...
    Timeline:
    00:00 Intro & What 3D Gaussian Splats Are
    01:52 3D Gaussian Splatting Workflow Overview
    03:22 Installing Dependencies/Requirements
    12:53 Cloning the Repository
    15:05 Setting Up the Optimizer
    18:00 Preparing Your Images for the Optimizer
    27:47 Optimizing the Data
    31:49 Running the Real-Time Viewer
    Please follow my channel for advanced tips and more informational videos on computer vision!
    Follow me on LinkedIn: / jonathanstephens
    Follow me on Twitter: / jonstephens85
  • Science & Technology

COMMENTS • 680

  • @markinmkn 9 months ago +3

    After 3 days of trying, I got it!! I just want to say thank you for your tutorial!! I had some troubles, and after solving them the results are really amazing!!

  • @patrickcasella 10 months ago +10

    Been seeing this everywhere, but nobody wanted to show how to do it! Thank you so much for this. I'm on my way to making some really awesome art. Keep it up!!

    • @thenerfguru 10 months ago

      Godspeed! If you post anything on social, tag me and I’ll reshare it. I’m fairly active on LI and X

  • @Raketenclub 10 months ago +65

    Man, I remember we learned the theory in the '90s... motion detection via motion blur, calculating/recreating vectors and depth in a black-and-white photo... but computer power was so limited we were not really able to proceed... working on paper, lol. Later volumetric clouds came in, and now we have this... This is truly awesome... something I had dreamed of for decades. I recreated my own 3D scenes over months; now it's only a video and photos you put into an AI. I am totally blown away.

    • @thenerfguru 10 months ago +14

      The theory definitely isn't new! Now that we have the compute hardware, we live in an amazing time.

  • @w000w00t 11 months ago +13

    This video is great! Thank you for being so generous with your knowledge about this cutting-edge technology! ...and the command prompt tips were GOLD! :)

    • @thenerfguru 11 months ago +5

      Thank you! I usually find the biggest hurdle for people getting into this tech is understanding the command prompt. I find it funny that I struggle with Unreal Engine Blueprints, which are arguably easier because they're visual.

    • @karineavetisyan7767 9 months ago

      I've trained the model using Gaussian splatting. I can see the result in nerfstudio and render a video. But how do I get a .ply or .obj mesh file so I can see my result with colors in any 3D viewer?

  • @pseudopod77 2 months ago +1

    You nailed it! Thank you. I had been led astray by other tuts and this one was very straightforward. I should drive all the confused viewers from other tuts this way.

  • @artiexus 2 months ago +2

    Thank you very much for making this video! You're a great teacher, I had no problems following this and I'm very happy with how my first splat turned out.

  • @OlliHuttunen78 11 months ago +24

    Hey, this is very cool! Thank you a lot. I finally managed to generate my first Gaussian Splatting model with your instructions. Lots of steps here, but it finally works! Nice that you made a Windows version of the original on GitHub. Thank you! This is so interesting!

    • @thenerfguru 11 months ago +1

      Thanks for the comment! Have fun with it!

    • @karineavetisyan7767 9 months ago +1

      I've trained the model using Gaussian splatting. I can see the result in nerfstudio and render a video. But how do I get a .ply or .obj mesh file so I can see my result with colors in any 3D viewer?

  • @tjjtan8461 6 months ago

    Really COOOOL video! Thanks for the guide and I successfully made my own 3D model!

  • @dialectricStudios 11 months ago +5

    the coolest advancement in neural rendering! amazing stuff man

    • @thenerfguru 11 months ago +3

      I agree! It is so amazing because it renders in real-time and looks crazy detailed.

    • @dialectricStudios 11 months ago +1

      @thenerfguru The first game engine to utilize this technology will make billions. Gaben, take notes.

  • @dested1 10 months ago +17

    It worked first try; this is incredible. I have a 3D cloud model of my living room forever. Great tutorial, super detailed.

    • @thenerfguru 10 months ago

      So awesome to hear that! People usually struggle to capture rooms.

    • @karineavetisyan7767 9 months ago

      Hey,
      I've trained the model using Gaussian splatting. I can see the result in nerfstudio and render a video. But how do I get a .ply or .obj mesh file so I can see my result with colors in any 3D viewer?
      Please help me if you have found a solution for this.

  • @bernhardkerbl1560 11 months ago +376

    Hi, one of the authors here. This is awesome, thanks so much for doing this! Is it OK if we link this on the GitHub page?

    • @thenerfguru 11 months ago +61

      Absolutely!

    • @thenerfguru 11 months ago +59

      Thanks for creating the project! I love where the technology is headed.

    • @thebigpicture1052 11 months ago +26

      Hello author. Please make it export to Unreal Engine or Blender, with modeling data from cheap lidar or generated by 2D-to-3D AI.
      Lots of indie filmmakers want to do VFX. This skips a lot of steps that only studios with millions of dollars could afford.
      If a decent PC can render what's on a virtual camera frame after importing it into Unreal Engine, it would revamp the entire market for small players... Thank you in advance ❤.

    • @crossybreed 11 months ago +2

      Hello author,
      One beginner question:
      Why is it CUDA-based only? Will it run on a Ryzen 5 5800G and above, using the APU's shared memory as VRAM?
      This guy showed how to run AI models on a CPU! He showed CUDA-based AI running on a CPU with results almost equal to an RTX 4080! Is it possible?
      ua-cam.com/video/H9oaNZNJdrw/v-deo.htmlsi=A0FjUtKPOUaERFJ2

    • @skinkie 10 months ago +2

      @crossybreed Typically the answer is that software development for CUDA has been more mature for a longer time. That does not mean that CUDA is 'stable', especially since its support typically requires recent hardware; and where CUDA supported older hardware, more recent development does not support those CUDA versions. In that respect, AMD has done crazy things themselves, for example with (or no longer) supporting OpenCL, and only recently has HIP started to appear, which (obviously) requires porting existing CUDA and OpenCL code. It is only very recently that you see academic work on GitHub; before, the code you saw in papers was never released. I don't think it is in researchers' best interest to make the most portable code; in essence, they just prove that an idea works.

  • @BjarneKort 11 months ago +3

    Been looking forward to this, thank you!

  • @kurtnana9927 6 months ago

    Very helpful guide. Thank you so much for making this video and saving a lot of time for others trying to understand and install the program :D

  • @SGRuLeZ_Art 9 months ago

    Thank you, Jonathan! It works and it causes a storm of emotions in me. Just crazy!)

  • @JankyzFSX 9 months ago +2

    Amazing job, guys! As a surveyor, I know photogrammetry, and this is another level ♥

  • @mars8164 4 months ago +1

    Very detailed instructional video; love from a new beginner. :)

  • @sirens3237 6 months ago +2

    I cannot thank you enough for this. I went from knowing basically nothing to a completed scene in 3 hours.
    It would have been much faster but I had the wrong CUDA version.
    You absolutely nailed this tutorial!

    • @thenerfguru 5 months ago

      Thank you!

    • @marcosmoura911 3 months ago +1

      Which version did you have, and which did you need?

  • @yanmeiwang-km4ko 3 months ago

    Thank you very much for the video, it helped me a lot. Thank you for your generous sharing!

  • @renderinmymind 10 months ago +1

    So good, man, running perfectly. Thanks for the video.

  • @JOMFRUHOLMEN 9 months ago +1

    This is going to change the world.

  • @attentiondeficitdisorder 10 months ago

    Thank you for the tutorial. Very helpful.

  • @ananpinya835 11 months ago +1

    Wow, amazing, this is great work! I am looking forward to seeing this technology applied to Google Maps' Street View.

    • @thenerfguru 10 months ago +1

      I'm sure this technology or similar will be integrated into their new Immersive View.

  • @therealkhroma 11 months ago +1

    Much thanks. Will be trying this out tomorrow.

    • @thenerfguru 11 months ago

      Great! Let me know how it goes!

  • @jonatan01i 10 months ago

    Thank you very much for making this more accessible!

  • @tokyowarfare6729 11 months ago +1

    I've got a 3090 Ti; I feel blessed that I'll be able to mess with this tech. Thanks a lot for the tutorial.

  • @qbaismo728 9 months ago

    Dankeschön. Thank you for doing this; it was a very good explanation.

  • @Woojtyla 9 months ago

    You're the best! Thanks for this tutorial, mate ❤❤❤

  • @carlosreynoso8303 9 months ago +1

    Great tutorial. Thanks!

  • @Peluche070 9 months ago

    We used to use this 13 years ago for 2D-to-3D stereo conversion with a polar axis. :) Now it's real-time and hardware-based. It used to be software-based, with depth maps laid out manually. :) Now the hardware auto-creates depth maps, which changes everything.

  • @AzadBalabanian 11 months ago

    Very useful tutorial. Thanks mate

    • @thenerfguru 11 months ago

      Glad you found it useful! More coming later this week.

  • @ZergRadio 9 months ago +3

    Nice tutorial. I would like to give a suggestion: when showing webpages or applications, it would be great if you could zoom in so the text is bigger and people with impaired sight can see what is on screen :)
    Thanks

  • @mn04147 19 days ago

    Wow, you are so kind. This video is so easy to understand. Thank you!

  • @user-gx7un8uu7h 7 months ago

    Awesome Tutorial!

  • @Bantammenace1903 11 months ago +5

    Bravo. I will give this a go with images captured with my Leica BLK3D and compare the result with the coloured textured mesh created from the same images in RealityCapture.

    • @thenerfguru 11 months ago +2

      Share the results! Completely different outputs and visualization techniques.

    • @rubyspot 11 months ago

      I would love to see it. Do you have LinkedIn or something?

  • @esuelle 10 months ago +3

    I managed to get pretty cool results on an RTX 2070S with just 5000 iterations. The training step was pretty fast too, under 10 minutes! Incredible technology. I can only imagine how much more incredible it's going to get in a couple of years. Thanks for the video; everything was explained very clearly.

    • @thenerfguru 10 months ago

      Glad it worked well for you! I bet you could run it all the way to at least 7000 iterations as well.

    • @karineavetisyan7767 9 months ago +1

      I've trained the model using Gaussian splatting. I can see the result in nerfstudio and render a video. But how do I get a .ply or .obj mesh file so I can see my result with colors in any 3D viewer?

  • @manu.vision 11 months ago +1

    Well done!!

  • @VaunaKiller 10 months ago +1

    Imagine capturing images for google street view using this technique 😃 I won't leave home ever again

  • @vikramsandu6054 29 days ago

    Amazing. Thanks a lot.

  • @culpritdesign 10 months ago

    Thanks for sharing!

  •  10 months ago +17

    I was getting errors with "diff-gaussian-rasterization" and "simple-knn" when running the command "conda env create --file environment.yml". I just wanted to let others with the same problem know what I did to solve it.
    I had to add "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64" to PATH in order for the build command to find cl.exe.
    And since the "conda env create" command was aborted before it finished, I had to start a new CMD window after adding to PATH and rerun the commands, ending with "conda activate gaussian_splatting", and from there I typed "conda env update --file environment.yml" to start the build process again. Hope it helps someone. 😊
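
    For anyone hitting the same build failure, a minimal sketch of that recovery sequence (the MSVC version folder and VS edition will vary by install):
    :: 1) Add the MSVC x64 host tools folder to PATH, e.g.
    ::    C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64
    :: 2) Open a NEW cmd window so the PATH change is picked up, then:
    conda activate gaussian_splatting
    conda env update --file environment.yml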

    • @lilmanpurse8603 10 months ago

      I had the same issue and this tip didn't work; I got the exact same errors again. I have tried everything I could find to address the issues I am getting, but it seems like I'm at a dead end here. Hope someone can make a more accessible version soon.

    • @herrdrfink1020 10 months ago

      @lilmanpurse8603 Make sure to run these commands with administrator rights after adding the path:
      activate
      pip install -e submodules/diff-gaussian-rasterization
      pip install -e submodules/simple-knn

    •  10 months ago

      @lilmanpurse8603 Scroll down to the bottom of the GitHub page and check the common issues section. I saw that a more complete step-by-step instruction was there last time I checked. It might be a small step missed, like not closing your command window in between changes to the PATH, for instance. Hope you find a solution.

    • @user-vt9qk9xc2k 4 months ago

      @lilmanpurse8603 I tried two ways to solve that: one is adding the VS path just as described above, and I also tried changing the version of my CUDA toolkit from 12.3 to 11.8. I don't know which of these worked, but I've solved it now.
      One very necessary reminder: after you make changes to the environment, restarting the computer can be very effective. 😂😂

  • @blynch1751 9 months ago +1

    Awesome video! Any suggestions on where to get high-quality drone footage like this (following a similar orbit path)? I'm doing research on various NeRF / gsplat models.

  • @Daexx5 14 hours ago

    I was here before it went mainstream. It's awesome.

  • @Thats_Cool_Jack 10 months ago

    It feels like just yesterday that I tried out NVIDIA NeRFs and was disappointed by their limited use, but now here we are with visuals rendering out smoothly in real time :o

    • @Thats_Cool_Jack 10 months ago

      Wow, that's a lot of VRAM XD

    • @thenerfguru 10 months ago

      @Thats_Cool_Jack The requirement is dropping fast.

  • @user-ri9sv3uv5i 8 months ago

    Thank you very much!

  • @thestatpow5 10 months ago

    Thank you! I managed to make a very basic one with 4000 iterations on an RTX 2060. Not a lot of VRAM, but it's something!

  • @dougdaniels7848 9 months ago +5

    Man I don't even know where to start... I've been struggling with the submodules diff-gaussian-rasterization and simple-knn and have been combing through the GitHub comments about the same issue for hours... nothing seems to work. Do I just need to uninstall all components and start fresh? Seems to be a lot of confusion about which CUDA version to use and how to install torch.

  • @360_SA 11 months ago +1

    Great video, amazing tutorial skills. I would like to know: can I use 360 photos, or does that require different settings?

    • @thenerfguru 11 months ago +4

      You can’t natively use 360 video. It needs to be separated into parts. I’ll make a tutorial soon.

  • @TheMouseair 10 months ago +1

    Very cool tutorial; I appreciate you making this.
    I am also curious whether this can be applied to video/image sequences that contain human actions, like dancing or performances, instead of static poses or static objects.

    • @thenerfguru 9 months ago

      Not with this project. Check out this project that tackles the problem: zju3dv.github.io/4k4d/

  • @bibimblapblap 9 months ago

    When running convert.py, COLMAP appears to utilize my GPU initially for feature extraction but then relies completely on the CPU for retriangulation / global bundle adjustment, which takes a long time. Is there any way to tell it to use the GPU for this to speed things up?

  • @TRVRBR 9 months ago +1

    Hi! Great project! How about adding support for EXR?))) It would be a fantastic opportunity to create HDR projects, such as those for cinema. I'm currently filming a low-budget historical movie, and I really appreciate your software!

  • @topvirtualtourscom 10 months ago

    Hi, after starting training I always get these lines:
    Loading Training Cameras [30/09 14:41:23]
    [ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 14:41:23]
    Loading Test Cameras [30/09 14:43:54]
    Number of points at initialisation : 172044 [30/09 14:43:54]
    About the [ INFO ] part ("Encountered quite large input images..."): should I resize the images or just leave them as they are? Thanks.
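
    That INFO message is just the trainer auto-downscaling large inputs; to keep the original resolution (at the cost of VRAM), the flag it names can be passed straight to the training script. A sketch, with a hypothetical dataset path:
    python train.py -s C:\data\my_scene -r 1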

  • @VaunaKiller 10 months ago +1

    Can we use this as an alternative to NeRFs (or in combination with them) to convert video/images into "nicer" 3D models, addressing the downsides of NeRFs (garbled geometry, muddy "textures")?
    Certain angles certainly look better compared to NeRF output (even stunningly so), but I don't quite understand whether these "Gaussian points" can enable geometry estimation or provide better point colors.
    A "dumb" way that comes to mind is to produce video/lots of pictures straight from the 3D Gaussian viewer (hoping to get perfectly consistent stabilization/angles/image quality compared to the original video) and feed them into Nerfstudio, but I am sure this is not an optimal approach 😅

  • @dougdaniels7848 9 months ago

    Is there a way to export a LAS/LAZ file or some other format for the point cloud that this generates? At a glance it looked like the export option was for exporting a video; I assume that's a video recording of the user moving around in the environment?

  • @kishekzun 8 months ago

    Hi, first, huge thanks for the video! First time getting into it, and it's working great.
    I have a question: I have a good recent PC (14900K, etc.) but I still have my 3080 10 GB, and I struggle a bit. Is it possible to change the checkpoint to 5000 iterations, for example, to do some tests? How and where can I change that?
    At the beginning of training it runs at around 20 it/s, and then I'm stuck between 5 and 6, so it's pretty long.
    Again, thanks, and keep up the good work
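
    Shorter test runs are possible through the training script's iteration flags; a sketch, assuming the --iterations and --save_iterations flags from the original repo's train.py and a hypothetical dataset path:
    python train.py -s C:\data\my_scene --iterations 5000 --save_iterations 5000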

  • @belzebubukas 9 months ago

    @thenerfguru Hi, I have an idea that would utilize differentiable Gaussian splatting, and I have a couple of questions you might be able to answer; it would save me tons of time trying to figure it out myself.
    1) Is an orthographic camera available, in addition to the perspective camera?
    2) Currently the splats disappear when they are close to the camera (which makes sense for your use case). Would it be possible to disable this?
    If both are true, it would be possible to "slice" through a set of splats, which might have interesting use cases. The vertex representation provided by PyTorch3D is unfortunately unsuitable for my purposes...

  • @TheLorsange 8 months ago +1

    Hi, great tutorial, thanks a lot!!!
    Is there any explanation of how to use CUDA 12? I didn't find it in the repo.

  • @Trendish_channel 4 months ago

    Hi!
    Thanks!
    If I have two iPhones filming static videos from different angles, is it possible to combine the footage to create one scene with better quality?
    What about movement? Is it possible to animate a Gaussian splatting scene?

  • @SolidClouds_NL 10 months ago +1

    Awesome video! We are getting this message while training the model:
    Tensorboard not available: not logging progress [30/09 12:35:06]
    Reading camera 990/990 [30/09 12:35:15]
    Loading Training Cameras [30/09 12:35:15]
    [ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1 [30/09 12:35:15]
    Other than that, it seems like it is busy, because it's not letting me enter a new command.
    Do you know how to resolve this?

  • @zahir3d 9 months ago

    Thanks for the tutorial. Is it possible to export it at the end to a 3D format (FBX, OBJ...)?

  • @gridvid 10 months ago +6

    Correct me if I'm wrong, but it should be possible to convert any 3D Blender scene into Gaussians, because we already know the position of the virtual camera in the rendered 3D scene. So it should be possible to create photorealistic, interactive scenes pretty quickly from 3D scenes created in Blender, using a few images rendered with Cycles. This could be a revolution in gaming as well.

    • @mattizzle81 10 months ago +1

      Indeed, you could render some pretty computationally expensive scenes and convert them to this format.

    • @Alex29196 9 months ago +1

      @mattizzle81 I converted a 3D Blender scene into Gaussians, and it looks amazing. You can do movie production with a low-end PC if you render the scene in Eevee.

    • @robertmayster7863 7 months ago

      Not really convinced that a triangle-based, photorealistic Blender scene renders faster or better if you convert it to Gaussians. It's just a different, more complex way to represent the same dataset. I would argue you actually lose performance and quality, since you approximate the original geometry and shaders with particles and 4th-order spherical harmonics.

    • @zyang056 7 months ago

      @robertmayster7863 Use Cycles for offline rendering, GS for real-time rendering with the lights and transparency baked in.

  • @collin6526 10 months ago

    When I try to compile it, I get errors saying that the code has issues. I have GCC and VS Code. It is failing to build the wheel for diff-gaussian-rasterization.

  • @BOLL7708 11 months ago +5

    I don't have the machine for this, but I immediately want to see what one of these splatted scenes look like in a VR headset. 😁

    • @thenerfguru 11 months ago +8

      Keep an eye out for VR-enabled 3D Gaussian splats. Plus, it takes a lot less GPU power to view these; training is the VRAM bottleneck.

    • @olgaforce6678 11 months ago +6

      I just ran it with a 3070 GPU with 8GB

    • @andershjemdahl6547 11 months ago

      @thenerfguru I can't wait! What resolution do you expect to be able to pull off, say on a Quest 2, using Steam vs. HMD only?

    • @TomasDelBianco 10 months ago

      @olgaforce6678 Did you manage to train a model with a 3070?

    •  10 months ago

      @olgaforce6678 How did it go?

  • @dragonferno8 10 months ago

    @thenerfguru I'm having some problems with the setup stage; I don't know what I'm doing wrong. I am getting some errors when I enter conda env create --file environment.yml:
    I am not able to install the pip dependencies. I went to the GitHub repo and asked for help, but after trying all the solutions people gave, I was not able to get the error fixed. Can you please help me out?

  • @MatthewRumble 10 months ago +1

    I am getting the error "conda is not recognized as an internal or external command" on the command prompt. I'm stuck at 16:30. I installed Anaconda and restarted my computer, but it didn't work.

    • @thenerfguru 10 months ago

      Without seeing the exact error output, I can't know for sure what is happening. See if this solves your issue: github.com/graphdeco-inria/gaussian-splatting/issues/146
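
      A common cause, assuming conda simply was not put on PATH during install: run the commands from the "Anaconda Prompt" that the Anaconda installer adds to the Start menu, or initialize cmd once from that prompt so regular Command Prompt windows recognize conda:
      conda init cmd.exe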

  • @aimattant 8 months ago

    Excellent. I guess for the coding world of GitHub models and Python, Windows is a must to try out these models, unless there is a Mac workaround. Any advice?

  • @neobahiastudio 8 months ago

    Good video. I got an error when I ran the python train.py -s line:
    Traceback (most recent call last):
    File "C:\Users\IvoJr\gaussian-splatting\train.py", line 13, in <module>
    import torch
    ModuleNotFoundError: No module named 'torch'
    I don't know how to solve it. Can you help me?
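
    A "No module named 'torch'" error typically means the script ran outside the conda environment; assuming the gaussian_splatting environment from the video was created successfully, activating it first should resolve it:
    conda activate gaussian_splatting
    python train.py -s <path to your data>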

  • @marceloaraujo3468 8 months ago

    Hello, I had problems with environment.yml, but I used conda env update -f and apparently it's working... Thank you!

  • @yaroslavkozlitin6869 8 months ago

    Just curious: I have a somewhat newer graphics card, a 4070 Ti, but it has 12 GB of VRAM, which is less than the 3090's 24 GB. Will it work?
    Thanks!

  • @jag24x 11 months ago

    Thanks!

  • @vjcatalyst7830 10 months ago +1

    Awesome video! Have you seen the paper on dynamic 3D Gaussian splatting? The code is supposed to come out soon. It looks very exciting.

    • @thenerfguru 10 months ago +1

      I have seen it! The only downside is that it requires over a dozen cameras in an array, recording simultaneously.

  • @Tos985 10 months ago

    You're a great teacher, really easy to follow; thank you very much. Is it possible to export a mesh or point cloud? I'm very new to this, but I saw that in Instant-NGP there is a "mesh it" button. Is it the same here?

    • @thenerfguru 10 months ago

      This project does not have meshing capability. Perhaps in a follow-on project.

  • @BlaiGraell-kh4wq 10 months ago +1

    Wow!! This is really amazing. I'm new to radiance field methods and this is mind-blowing. It's the future of the entire audiovisual industry. Also, your tutorial is incredibly detailed and well explained. Unfortunately I reached a point from which I couldn't continue, because I work on a Mac. Do you know if there's any resource where we could learn how to use Gaussian splatting on an M1 Max MacBook? Thank you very much in advance.

    • @thenerfguru 9 months ago

      For you, your best bet is to use Luma AI's new free Gaussian splats.

    • @BlaiGraell-kh4wq 9 months ago

      That sounds really promising; I'll give it a try. Thank you so much for taking the time to answer all the questions so fast.

  • @EconaelGaming 10 months ago +1

    Any way to set the size of the render target of the viewer? The resolution of the rendered image is kinda small. I'd like to try 4K.
    EDIT: --rendering-size 3840 2160
    Getting cinematic 24 fps on a 1660 Ti!!

    • @thenerfguru 10 months ago

      So that sizes the viewer. My hunch is that when you trained the data, the images were downscaled to 1.6k. That's fine though, you get awesome results still.

  • @ethioanimation3898 4 months ago

    Hi, I have a question. I am new to all of this, but can I export to a format like FBX or OBJ to use in 3D software?

  • @StevePaulSounds 9 months ago

    I'm on a 2080 Ti. Whenever I try to keyframe in the viewer, I can't move the camera anymore. Is there something wrong with the software or my graphics, or am I doing something wrong? How's it supposed to work?

  • @xMemn0nx 8 months ago

    The conda env install kept failing for me until I finally installed Python 3.7. I already had 3.10 and 3.11 installed, but apparently they weren't compatible.

  • @Because_Reasons 11 months ago

    Very cool, thanks for this. Curious: will this take 4K video data from something like a DSLR? Also, is there a way to export these into a web-viewable format?

    • @thenerfguru 10 months ago +1

      You can use 4K imagery from a DSLR. Just ensure the output imagery is consistently exposed and does not contain too much motion blur. You can use FFMPEG to extract stills.

  • @leeishere7448 12 days ago

    This is amazing. I wish I could use it, but I don't have enough VRAM for this; I have a 3070 Ti, unfortunately. I hope it will work on GPUs with 8 GB of VRAM in the future.

  • @ilghera3555 9 months ago

    Can you export it to 3D scenery such as .obj with this method? It would look damn cool. Also, I managed to render a small scene with just a 3050 in an hour and a half! There's hope.

  • @LTE18 10 months ago +3

    Thanks for the video; I managed to get it working.
    As someone who is new to radiance field rendering, may I ask: is there a way to use this data and convert it into a 3D object, like 3D scanning?
    What uses/applications can I put this data to besides viewing it with a viewer? Thanks

    • @thenerfguru 10 months ago +1

      If your goal is 3D modeling and not novel view synthesis, I suggest another method such as Neuralangelo. Meshes were not the intended goal of this project.

  • @tobiasdierl4557 3 months ago

    I love you ;)

  • @ParinithaRamesh-qf2ig 1 month ago

    Hey, could you maybe make a video on the Colab file that is provided with the main code? There is no test file to evaluate the model or see the output in any way.

  • @omnionmedia 7 months ago

    What is the base coordinate system/axis used? Right- or left-handed? I.e., Z up, Y forward?

  • @ADUuniverse 10 months ago +2

    \gaussian_renderer\__init__.py", line 14, in <module>
    from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
    ModuleNotFoundError: No module named 'diff_gaussian_rasterization'
    It's showing me this error! I tried to solve it but it isn't working... Please help!
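
    A common fix, assuming the conda environment itself exists and only the submodule build failed: reinstall the rasterizer from the repo root inside the activated environment, the same way the thread above does for the other submodules:
    conda activate gaussian_splatting
    pip install submodules/diff-gaussian-rasterization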

  • @z-time3291 8 months ago

    I see in the posts that the big error for everyone is: error installing diff_gaussian_rasterization.
    For one reason or another, I installed all the dependencies and still get this error on my machine.

  • @user-nb2tf1ee6u 9 days ago +1

    During the process of using gaussian-splatting-Windows, when the image files originally in the 'input' folder are processed by python convert.py -s and output to the 'images' directory, they reduce in number from dozens to just a few, and these images now display a wide-angle effect. What could be the issue, and how should one go about troubleshooting this?

  • @Atlas_Redux 9 months ago +2

    I am happier and happier every day for getting an RTX 4090...

    • @thenerfguru 9 months ago +1

      Haha! Yes!

    • @Atlas_Redux 9 months ago

      @thenerfguru Have you had any luck opening it in Blender or Maya? I'm getting no textures, no matter what. I can open Meshroom solutions just fine in Blender, but that is of course a different, worse method. I bought the plugin for Unreal Engine though, and that works like a charm.

  • @neilbradley9035 10 months ago

    Hey, I tried to follow but had a lot of trouble with the COLMAP stuff. I don't really understand how to get it working. :(

  • @tightoa 10 months ago

    I'm at the "SETUP" step and can't build. How can I proceed please? Restarted and all. I would really like to be down with it!

  • @Seemarq 10 months ago

    Hi Jonathan, I have one issue starting ffmpeg.
    It shows me: C:\Users\(User): No such file or directory
    I think it was caused by my username having a space.
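
    Paths containing spaces must be quoted in cmd; a hypothetical example:
    ffmpeg -i "C:\Users\John Smith\Videos\scene.mp4" -qscale:v 1 -qmin 1 -vf fps=2 %04d.jpg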

  • @liquidmasl 10 months ago +1

    So, I have a high-density laser-scanned point cloud, and I have 360 pano images every few meters; it should be possible to use that directly, right?
    My point cloud is super high density and the points already have color, but for training it will still need the panos, I suppose?
    Damn, I want to get this to work.

    • @thenerfguru 10 months ago

      You'd still need the panos if you were to hack it. You would also need camera poses. The sparse point cloud is just an initial set of locations for the splats; from there, training will move and morph the splats to fit the scene.

  • @Zephirus10 10 months ago

    Great tutorial, thank you! I'd like to know if the resulting model and point cloud can be accurately geolocated/georeferenced for use in GIS 3D analysis scenarios?

    • @thenerfguru 10 months ago

      I bet it is possible; however, that would take additional development time. I can look at the sparse point cloud from my model and see if it has any geolocation component. Most likely not, unless I processed data with known GPS data.

    • @Zephirus10 10 months ago

      @thenerfguru Thanks! I know it was a long shot to ask, but I thought maybe there were options. I will test the point cloud I made; however, I too do not have GPS data for my first-try test. I have to compliment your tutorial once again: it was so well explained that even with my novice understanding I could produce a fairly decent model.

  • @user-xn7ol1gh6c 6 months ago

    Dear author, hello. Regarding your Gaussian splatting, I would like to ask: I actually want to use it to export 3D models. I see that the generated result is very realistic, and I want to export the characters or objects in the scene separately to 3D formats such as OBJ for my own use. Can I do this?

  • @MrJenius 10 months ago +2

    Is it possible for the output of this to be converted to some kind of object file? Or does it output that natively as an option? I am wondering if it would be possible to use this to scan objects and then bring them into VR.

  • @GreenTea-101 10 months ago +2

    Jonathan, thanks for doing this. As the data is effectively point-cloud based, can a wireframe mesh be made from the source data along the lines of Poisson or a similar methodology?
    Also, what lighting data do we get from the result? Can this environment be used to calculate a 3D radiance (if not raytrace) model?
    Very exciting stuff. Apologies if you've gone into detail on this elsewhere.

    • @thenerfguru 10 months ago +2

      Great questions. You could always run the imagery through a photogrammetry suite to get a mesh; also, something like SDFStudio would be a great option.
      All of the lighting is baked in. No one has really addressed that issue yet.

    • @GreenTea-101 10 months ago

      @thenerfguru Hrmmm... you could, but since the GASP has already created a point cloud that lives within a coordinate system, getting 1:1 parity would be ideal.

  • @AIWarper 7 months ago

    Have there been any improvements or changes in this space? I know all of this is moving sooooo fast, and I see new repos on Twitter for splatting and related stuff what seems like every other day.
    Would be great to get a follow-up!

  • @gillesbillard8525 10 months ago

    Hi from France!
    What about 3D stereo (two points of view at the same time, mixed into one anaglyph scene, or displayed on two screens, or on a single screen split into two halves)?
    BTW: Thanks for the tutorial.

    • @robertmalzan6724 3 months ago

      Judging from this video, you only need to use 1 half (left eye or right eye) to get the full 3D splat

  • @EventHorizonVR2023 11 months ago +1

    What 360 camera did you use for this splat, and which one do you recommend? I have the Insta360 RS One Inch and the Ricoh Theta X.

    • @thenerfguru 11 months ago

      Those would work great. I have the RS One Inch.

  • @marcosmoura911 3 months ago

    Is it possible to move around with the arrow keys and the mouse? The trackball is so bad...

  • @BrunoPiresBPS 10 months ago +1

    Do you need something like an iPhone Pro to capture the images, because of the "laser" (lidar) thing? Or can it be any footage, recorded on any device?

    • @thenerfguru 10 months ago

      Any photo. No lidar is involved.