3D Gaussian Splatting First Impressions

  • Published Nov 27, 2024

COMMENTS • 267

  • @GL-GildedLining
    @GL-GildedLining A year ago +217

    Transcendental progress, happening so suddenly. Just a year ago, NeRF already seemed like magic! ^‿^

  • @jordanmuller2536
    @jordanmuller2536 A year ago +102

    Gaussian Splatting totally reminds me of the 3D crime scene scans like in Star Trek: Into Darkness. Amazing stuff.

  • @artavenuebln
    @artavenuebln A year ago +6

    I'm 38 years old, working with AI and 3D software like Blender, and this still blew my mind like back when I was 14... wow.

    • @thenerfguru
      @thenerfguru  A year ago +4

      I'm 38 years old, work in 3D software, 3D reconstruction from imagery, etc., and I was blown away!

    • @artavenuebln
      @artavenuebln A year ago +1

      @@thenerfguru I'll try it in the next few days with my RTX 3070 and hope for the best, haha.

    • @haloandrei
      @haloandrei A year ago

      @@artavenuebln Keep us updated! I want to know how it performs!

    • @ali09gaming58
      @ali09gaming58 A year ago

      How do I use this? Do I need Unreal Engine?

  • @Ponk_80
    @Ponk_80 A year ago +2

    Google maps is going to look insane if this gets implemented.

  • @data9k
    @data9k A year ago +24

    Excellent results! Props to your flying pattern and skills; the quality of the input is crucial. Can you give us more information about the flight?

    • @thenerfguru
      @thenerfguru  A year ago +14

      I plan on it. Reminder that I need to do an episode on image capture.

  • @UltimatePerfection
    @UltimatePerfection A year ago +9

    If I didn't see the UI on the thing, I'd legit think it's some IRL footage from a drone or something. Next generation of games gonna be WILD.

  • @crustman5982
    @crustman5982 A year ago +6

    Actually reminds me a lot of the PS1 era, seeing an actual photograph as a skybox or texture for the first time.

  • @miguelmflowers
    @miguelmflowers A year ago +36

    Question! Can we 3D print the results? Like, exporting the file as an object, then trimming the parts we don't need and 3D printing the rest? That would be awesome!

    • @thenerfguru
      @thenerfguru  A year ago +41

      Currently, no. With this project, the goal was novel view synthesis, not discrete solid surfaces. That said, additional research has shown it is possible to extend 3D Gaussian Splatting to solid meshes (which could be converted to 3D-printed objects). However, I would not use this technique for 3D modeling; I would use a different technique such as NVIDIA's Neuralangelo.

    • @BrianHockenmaier
      @BrianHockenmaier A year ago +3

      @@SkeleTonHammer The output of photogrammetry is a point cloud. 3D Gaussians are already a way of displaying point cloud data, so what you're describing is like a lossy conversion from one point cloud to another.
      To get a mesh to use in something like a game engine or 3D printer, there are a bunch of surface reconstruction algorithms to use. Hopefully researchers are working on those too.

    • @Anton_Sh.
      @Anton_Sh. A year ago

      @@BrianHockenmaier "Lossy"? I guess it depends on the camera viewpoint and the quality of the input data.

    • @BrianHockenmaier
      @BrianHockenmaier A year ago

      @@Anton_Sh. Necessarily lossy, because the input data to a Gaussian is a point cloud and the output data of photogrammetry is a point cloud. There is no more information to glean from running photogrammetry on screenshots of a Gaussian, so essentially the process we're talking about is a lossy conversion of one point cloud to another with a lot of compute in between :)

  • @davidhuculak1099
    @davidhuculak1099 A year ago +83

    Curious to know how much VRAM this took to run the viewer after it was already trained

    • @nullptr.
      @nullptr. A year ago +51

      Gaussian does use more VRAM, but that's not a hard technological boundary or anything. The only reason current consumer GPUs don't have 32 GB+ of VRAM like the NVIDIA Tesla series is that they haven't needed it. Once games and applications start using Gaussians, specs will follow.

    • @davidhuculak1099
      @davidhuculak1099 A year ago +7

      Would still be nice to see some hard numbers on it. Any resource could become a bottleneck depending on the type of application, including VRAM.

    • @Netsuko
      @Netsuko A year ago +7

      @@nullptr. Gaussian will not be the catalyst, or at least it seems highly unlikely. This is way too niche and will stay that way for the foreseeable future. I doubt games will use this tech in the coming 5+ years, so games will not be the driver either: if you develop a game that only runs on 20+ GB, you leave out the vast majority of gamers. No, what will very likely drive the increase in VRAM is machine learning: local large language models and Stable Diffusion.

    • @kubastachu9860
      @kubastachu9860 A year ago

      @@Netsuko I don't see any need for local language models nor stable diffusion. Some small function-specific networks, yes, but a giant service that's already available in the cloud? Not going to happen.

    • @thenerfguru
      @thenerfguru  A year ago +21

      Wow, this comment thread went off the rails. TBH, most high VRAM requirements are during the training. Rendering the scene takes less, but it's still not low. I don't have hard numbers. I can run some tests and make a video out of it.
      Instant NeRF on the other hand takes very minimal VRAM to render in real time. But the quality is much lower.

  • @Nytra_
    @Nytra_ A year ago +1

    Looks so good. Much better than the ugly blobby mess you get with regular photogrammetry techniques.

  • @Zvezdan88
    @Zvezdan88 A year ago +5

    Can't wait for the tutorial on how to install the damn thing :D

    • @thenerfguru
      @thenerfguru  A year ago +5

      Still working on it. Sorry for the delay!

  • @ZED-PV
    @ZED-PV A year ago

    The Gaussian splattering nerd? 😅
    This is mental. Almost everything I've seen people making is nearly indistinguishable from live video from the air/ground.
    Mental! Subbed.

  • @Jandodev
    @Jandodev A year ago +1

    This is magic! I've been playing with NeRFs since last year; looks like I missed a bunch!

    • @thenerfguru
      @thenerfguru  A year ago +2

      This is brand new! So you're staying up to speed 😉

  • @elfnetdesigns702
    @elfnetdesigns702 A year ago +13

    I used to build those towers for real. Lots of them, all different but the same.
    In fact, I have quite a few parts from some of them here in the shop (antennas, ice bridges, heliax connectors and supports, climbing pegs, various pieces of equipment from inside the equipment shelter too), and I know how to work Blender, so I could create one of these sites from pure memory with near-perfect accuracy, since they are burned into my brain.

  • @policematrixx
    @policematrixx A year ago +1

    LOOKS ALMOST REAL

  • @LifeAsANoun
    @LifeAsANoun A year ago +1

    how did I end up here??? WTF AM I SEEING??? this is amazing!!

  • @h.a.9880
    @h.a.9880 A year ago +2

    1st thought: this could be so cool for video game graphics
    2nd thought: Holy shit, this would be fantastic to do VR tours of buildings or showcase museum pieces.

    • @thenerfguru
      @thenerfguru  A year ago

      Yeah, now you can easily get it into Unity (see my video on it). From there, virtual tours, heck yeah!

  • @BinkuSama
    @BinkuSama A year ago

    This is going to be groundbreaking for new VR tools.

  • @BjarneKort
    @BjarneKort A year ago +1

    Very excited for that tutorial! So glad I found your new channel haha

  • @jfftck
    @jfftck A year ago +1

    This looks better than what realtors use for 3D tours of houses, I would love for this to replace that technology. Imagine that a drone could be used instead and be programmed like a robot vacuum to sweep the house and get this level of detail.
    This would end the practice of taking pictures of each room and give the potential buyers an idea of the layout of the house.

    • @thenerfguru
      @thenerfguru  A year ago

      Have you checked out Luma AI’s new Flythroughs app? They are essentially doing this.

    • @jfftck
      @jfftck A year ago

      @@thenerfguru I am ready for it to be mainstream. The current technology, with the zoom effect to transition to each waypoint, is headache-inducing and should be banned from all usage. I have always hated the point-to-point 3D experience for more reasons: fixed angles, stopping at the entrance of a room, etc.

  • @GMax17
    @GMax17 4 months ago

    Amazing effect for static backgrounds

  • @gradientattack
    @gradientattack A year ago +7

    This is spectacular; brilliant work 👏👏👏👏

  • @WangleLine
    @WangleLine A year ago

    I'm really excited to use this for cool and weird art

  • @nutzeeer
    @nutzeeer A year ago +1

    Sooo what you are saying is: we can input any movie and get out a 3D scene?! Wow! Hold on to your papers because this means any movie ever made can now be transformed into a 3D movie!

    • @thenerfguru
      @thenerfguru  A year ago +1

      Not exactly! You need parallax movement in the imagery. If I have a scene filmed from a stationary position, this would not work. Also, moving objects in the scene become ghostly floaters.

    • @nutzeeer
      @nutzeeer A year ago

      @@thenerfguru There is AI technology that can lay a depth map over an image; maybe that tech can be combined! Basically, the AI was trained on 3D data (images with depth), so now it can be run backwards to generate depth for images. I bet my hat this can be combined!

    • @nutzeeer
      @nutzeeer A year ago

      @@thenerfguru Thank you for the answer! I see, so moving objects would require another dimension to be added to the Gaussians, to add time. I wonder if a phone's lidar scanner can help, or just plain 3D video. Maybe the iPhone will support recording this, together with Vision Pro true 3D movie playback. That would be rad.

  • @jayvaghasiya_ai
    @jayvaghasiya_ai A year ago +3

    Waiting for tutorial, great work!

    • @thenerfguru
      @thenerfguru  A year ago

      It’s posted! ua-cam.com/video/UXtuigy_wYc/v-deo.htmlsi=K2sXGKfp7MyJoFLS

  • @lolmao500
    @lolmao500 A year ago +4

    It would be so cool if the software could create a 3D world you can be in using this tech. Like, put a VR camera in there and boom, you're in the 3D world you scanned.

    • @thenerfguru
      @thenerfguru  A year ago +3

      I’ve seen VR with a Unity plugin now

  • @nosyb
    @nosyb A year ago +4

    Really impressive! Eager to test, but I read training needs 24 GB of VRAM...

    • @thenerfguru
      @thenerfguru  A year ago +5

      That is correct. However, that may change. Stay tuned. It’s technically possible to have less VRAM.

    • @arianaramos1506
      @arianaramos1506 A year ago +3

      By tweaking the parameters a bit, it's possible to make it work with less memory (around 8 GB, depending on the dataset). Quality takes a hit, but it's still impressive.

    • @thenerfguru
      @thenerfguru  A year ago

      @@arianaramos1506 I'd be curious to see which parameters have to be changed. I haven't dug too deep into that yet.

    • @arianaramos1506
      @arianaramos1506 A year ago +2

      @@thenerfguru Their FAQ talks about changing either --densify_grad_threshold, --densification_interval or --densify_until_iter. I tried increasing the first one, which causes fewer points to be kept, makes training go faster, and runs with less VRAM. Fewer points lower the overall quality, but results were still nice compared with InstantNGP on the same graphics card.

    • @thenerfguru
      @thenerfguru  A year ago +2

      @@arianaramos1506 Thank you for the info! I will include that in the "getting started" guide I am working on for folks who do not have 24 GB of VRAM.
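As a command-line sketch of the tweak described in this thread: the three flag names come from the project's FAQ, but the dataset path and the specific values below are illustrative assumptions, not recommended settings.

```shell
# Hypothetical invocation of the graphdeco-inria/gaussian-splatting trainer.
# Raising --densify_grad_threshold keeps fewer Gaussians, trading quality
# for a smaller VRAM footprint (values shown are illustrative only).
python train.py -s /path/to/dataset \
    --densify_grad_threshold 0.0004 \
    --densification_interval 200 \
    --densify_until_iter 10000
```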

  • @celas855
    @celas855 A year ago +1

    You can count on Google to be the first to widely adopt it and update Google Maps with this technology

    • @thenerfguru
      @thenerfguru  A year ago +1

      They are one of the largest contributors to radiance field technology in general. For Google Maps Immersive View and Waymo.

  • @naninano8813
    @naninano8813 A year ago +3

    They use spherical harmonics in the splat color function. I like that, but why? To model directional lighting effects?
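Roughly, yes: each splat stores spherical-harmonic coefficients so its color can vary with viewing direction, which captures view-dependent effects such as specular highlights. A minimal sketch of evaluating a degree-1 SH color for one splat; the coefficient values below are made up for illustration.

```python
import numpy as np

# Real spherical-harmonic basis constants for degrees 0 and 1.
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_color(coeffs, direction):
    """Evaluate a degree-1 SH color for a unit view direction.

    coeffs: (4, 3) array, one RGB coefficient per basis function.
    """
    x, y, z = direction
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    return np.clip(basis @ coeffs, 0.0, 1.0)

# A splat whose base color is grey but which reddens when seen from +x.
coeffs = np.array([
    [1.5, 1.5, 1.5],   # DC term (base color)
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [-0.5, 0.0, 0.0],  # x-dependent red component
])
front = sh_color(coeffs, np.array([1.0, 0.0, 0.0]))
side = sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
```

Viewed from +x the red channel is boosted; viewed from +z it stays at the base grey, which is exactly the directional color variation the commenter is asking about.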

  • @EveBatStudios
    @EveBatStudios A year ago +1

    I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update with Imagine 3D. Fingers crossed; that would be insane.

    • @thenerfguru
      @thenerfguru  A year ago

      The only blocker right now is licensing. Nerfstudio is looking to do their own version.

  • @rousseauromain692
    @rousseauromain692 7 months ago

    Really nice work!!! I am a total fan of what you have achieved. Did you try it in rainy weather? Which app did you use for the helix pattern? How can we discuss your work further?

  • @MonamiTech
    @MonamiTech A year ago +3

    I think Horizon Zero Dawn's Tallneck would be possible with this technology. Awesome.

  • @gawni1612
    @gawni1612 A year ago +5

    When he goes outside of the data it's kind of nightmarish and intriguing.

    • @thenerfguru
      @thenerfguru  A year ago +1

      Haha, that's typical of any radiance field.

    • @crustman5982
      @crustman5982 A year ago

      Yeah, honestly it has the feeling of a dream: not in the poetic sense of the word, but the actual mental image of a dream you had but barely remember.

  • @AdamCarp-r3x
    @AdamCarp-r3x A year ago +3

    Fascinating. Is it measurable, and if so, what is the accuracy? And do you find any notable cons compared to photogrammetry?

    • @thenerfguru
      @thenerfguru  A year ago +2

      It’s not measurable like a point cloud or mesh…yet. I bet those tools are coming though.

  • @Jddoes3D
    @Jddoes3D A year ago +1

    Interesting. Can you export data from it? Like to use in a 3D program like 3dsmax?

    • @thenerfguru
      @thenerfguru  A year ago +1

      You can view it in Unity and UE5 with plugins. More platforms will be building plugins I am sure. It's not a huge leap compared to NeRFs for visualization.

    • @Jddoes3D
      @Jddoes3D A year ago

      Thanks. Well, I'll keep an eye out for future plugins. It would be awesome to be able to export high-quality environments, because Google Maps ain't cutting it. @@thenerfguru

  • @mbyb6817
    @mbyb6817 A year ago +3

    Amazing capture! Is it possible to export as an OBJ file?

    • @thenerfguru
      @thenerfguru  A year ago

      Not with this project. Give it time and I bet you will be able to. Now I don't know about the quality of the textures though.

  • @jorisbonson386
    @jorisbonson386 A year ago

    That's astonishing

  • @ianritta
    @ianritta A year ago +3

    This is so cool. Are these methods possibly the future of photogrammetry? I only recently started becoming familiar with the subject while researching computer vision.

    • @thenerfguru
      @thenerfguru  A year ago +3

      I would consider this to be a complementary visualization layer.

  • @AIWarper
    @AIWarper A year ago

    Any tips on best practices for the drone capture here? I am hoping to do my first drone flight this weekend and it would be great to have some tips on dos and don’ts

  • @Povilaz
    @Povilaz A year ago

    This is _insane,_ holy shit...

  • @roembol
    @roembol A year ago +4

    Has anyone tried doing this for a virtual world made in something like Blender? It would be cool if you could get some insane graphics in real time.

    • @thenerfguru
      @thenerfguru  A year ago +2

      I did a video using Unity. Blender and UE5 are coming.

  • @Stringularity
    @Stringularity A year ago

    Awesome! Any tuts on setting the camera movements and recording/exporting them?

    • @thenerfguru
      @thenerfguru  A year ago +1

      Your best bet is to use the nerfstudio viewer. Here is a tutorial: ua-cam.com/video/A1Gbycj0bWw/v-deo.html

  • @benoitperrin6243
    @benoitperrin6243 A year ago +4

    Great work as always! Is this also lighter to render besides the quality?

    • @thenerfguru
      @thenerfguru  A year ago +8

      Training takes more time than other fast NeRFs. However, viewing it once trained is lighter which allows it to run in real-time.

    • @benoitperrin6243
      @benoitperrin6243 A year ago

      @@thenerfguru that’s amazing! I wonder if the final render could be hosted on html5/on the web

    • @slonkazoid
      @slonkazoid A year ago +1

      @@benoitperrin6243 No reason why this couldn't be done with WASM.

  • @HamguyBacon
    @HamguyBacon A year ago +1

    I don't understand why they don't just use a game-style view, with a mouse to look around.
    It's amazing that it even has real-time reflections.

    • @thenerfguru
      @thenerfguru  A year ago +1

      It’s because it was built on a viewer that is used for comparing datasets. A few different projects have integrated the tech with game engines that have better nav.

  • @oowaz
    @oowaz A year ago

    this tech is amazing

  • @jag24x
    @jag24x A year ago +2

    Possible for a video on how to install and run it? What is the minimum compute for the GPU for this to work? Great video!

    • @thenerfguru
      @thenerfguru  A year ago +4

      I am making a video this week. At minimum, you need a GPU with 24 GB of VRAM.

    • @SuleBandi
      @SuleBandi A year ago

      @@thenerfguru What's your input? Video, images?

    • @fusseldieb
      @fusseldieb A year ago +1

      @@SuleBandi He literally mentioned it: 300 photos.

  • @Dx_Ach
    @Dx_Ach A year ago

    It's gonna be insane if and when Google Maps or Google Earth uses this method.

  • @jooptablet1727
    @jooptablet1727 A year ago

    What are your input images like resolution-wise, and does the training Python script downsize them to 1.6k pixels for you too? Because my results aren't nearly as clear and high-res as yours; lots of "white fog". My training also takes a lot less time (but maybe that's thanks to the 4090 I bought specifically for generative AI stuff)... Anyway, thanks for being one of the few people covering this technique right now!

  • @ScottSquires
    @ScottSquires A year ago +2

    Which software and version?

    • @thenerfguru
      @thenerfguru  A year ago +2

      It's a project on GitHub: github.com/graphdeco-inria/gaussian-splatting

  • @Cam_Wight
    @Cam_Wight A year ago

    This will be great if we can get it into games. Just need to figure out the lighting.

  • @marinvalentinwolf
    @marinvalentinwolf A year ago +1

    Is there any way to import these Gaussian splats as assets / point-cloud voxel objects into a game engine like Unreal?

    • @thenerfguru
      @thenerfguru  A year ago

      Yes. Unity import is my next video. The one after that is Unreal Engine.

  • @VaultVesuvio
    @VaultVesuvio A year ago +2

    I wonder how this could help with 3D architecture visualization. Any thoughts on this? Are you able to save this out as an OBJ, FBX, etc.? Can you import 3D objects into this program?

    • @myth0genesis
      @myth0genesis A year ago

      I think there'd have to be some intermediate process, as Gaussian splatting, at least as I'm familiar with it in 2D, is just a series of monochromatic brush-stroke-looking swatches overlaid with each other. Looking at this demo, that's what seems to be going on here, too. If you want a clearer picture of what's going on, search "Gaussian Splatting" in Shadertoy and it'll show you how they're put together to make an image.

    • @thenerfguru
      @thenerfguru  A year ago +1

      Yeah, too soon. Come back in 6 months to a year.
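To make the "brush-stroke swatch" picture from this thread concrete, here is a minimal sketch (not the project's renderer) that rasterizes a single anisotropic 2D Gaussian splat; the center and covariance values are invented for illustration. Its footprint is the soft elliptical swatch that splatting renderers composite by the thousands.

```python
import numpy as np

# One 2D Gaussian "splat": a soft elliptical brush stroke on a 32x32 canvas.
H, W = 32, 32
center = np.array([16.0, 16.0])            # (x, y), illustrative
cov = np.array([[12.0, 6.0],
                [6.0,  6.0]])              # anisotropic -> elongated stroke
inv_cov = np.linalg.inv(cov)

ys, xs = np.mgrid[0:H, 0:W]
d = np.stack([xs - center[0], ys - center[1]], axis=-1)
# Squared Mahalanobis distance of every pixel from the splat center.
m2 = np.einsum('...i,ij,...j->...', d, inv_cov, d)
splat = np.exp(-0.5 * m2)                  # opacity falls off smoothly
```

The peak opacity sits at the center and falls off along the ellipse defined by the covariance, which is why overlapping splats blend into smooth surfaces instead of hard-edged points.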

  • @paulmdevenney
    @paulmdevenney A year ago

    This is crazy. How good is it when you start adding things to the scene that were not there? Can you? Does it stand out like a sore thumb?

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh A year ago +1

    Noice. I was really disappointed with NeRF; the hype was there, but when I tried it out it was never anything close to the demos presented. I'll have to give this a try. Question though: how capable is this of things like measuring features or exporting 3D assets? Obviously some scaling or reference lengths would be supplied.

    • @thenerfguru
      @thenerfguru  A year ago

      If your goal is measuring surfaces or 3D Geometry in general, I suggest Neuralangelo.

  • @Morenob1
    @Morenob1 A year ago

    This is great and all, but not for closeups. This would be beneficial if a 3D program could recreate the landscape with its own assets and textures for fast level production.

  • @phitc4242
    @phitc4242 A year ago

    SAO really be coming to life

  • @ronskiuk
    @ronskiuk A year ago +2

    Crazy, the cable detail. Does that mesh?

    • @thenerfguru
      @thenerfguru  A year ago +6

      This implementation does not mesh. Stay tuned for my experiments with Neuralangelo. That meshes!

    • @ronskiuk
      @ronskiuk A year ago

      Ah yeah I saw they released that now, too many toys to play with :) @@thenerfguru

  • @bradley3549
    @bradley3549 A year ago

    What hardware was used for your training? Is this within the realm of possibility with a DJI Mini 3 Pro and 5950x CPU/RTX 4090 GPU?

  • @leandrosn962
    @leandrosn962 A year ago +2

    That's amazing. Is it possible to import it into Unreal Engine?

  • @henryogan2017
    @henryogan2017 A year ago +1

    Cool! But how is it different from NeRFs?

    • @thenerfguru
      @thenerfguru  A year ago +1

      That warrants its own video. However, in a sentence: 3DGS generates a scene similar to a dense point cloud (but the points are splats) that can easily be rendered by your GPU. So you get really good visuals that run at 100 fps in real time.
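The "easily rendered" part comes down to depth-sorting the splats that cover a pixel and alpha-blending them front to back, much like transparent billboards. A toy single-pixel sketch of that compositing; the splat depths, colors, and alphas below are invented for illustration.

```python
# Each splat contributes (depth, color, alpha) to a pixel it covers.
# Values are invented; a real renderer gets them by projecting 3D
# Gaussians into screen space.
splats = [(2.0, 1.0, 0.5),   # farther, bright
          (1.0, 0.0, 0.5)]   # nearer, dark

color = 0.0
transmittance = 1.0          # fraction of light still passing through
for depth, c, alpha in sorted(splats):   # front-to-back by depth
    color += c * alpha * transmittance
    transmittance *= (1.0 - alpha)

# The near dark splat occludes half of the bright one behind it.
```

This per-pixel loop is trivially parallel across pixels, which is why a GPU rasterizer can run it at real-time rates.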

  • @reallyreallyronron
    @reallyreallyronron A year ago

    Are you going to make a tutorial on this? It’s so awesome

    • @thenerfguru
      @thenerfguru  A year ago

      This tutorial?
      Getting Started With 3D Gaussian Splats for Windows (Beginner Tutorial)
      ua-cam.com/video/UXtuigy_wYc/v-deo.html

  • @ozymandias8523
    @ozymandias8523 A year ago

    Imagine games and vr with this

    • @thenerfguru
      @thenerfguru  A year ago

      Oh yes! This is also a fast way to world-build (at least when you want a duplicate of a real environment). Google Earth VR would be interesting using this technology. They are already diving into NeRFs.

  • @dany3020
    @dany3020 A year ago +1

    But can you export the 3D model in a standard format?

    • @thenerfguru
      @thenerfguru  A year ago

      Not with this code implementation. It's possible, though.

  • @3govideo
    @3govideo A year ago

    Could you then run an image-recognition program over it to identify cracks on the antennas?

  • @PurpleFX1
    @PurpleFX1 A year ago +1

    Could this be the future of Street View for Google Maps?

    • @thenerfguru
      @thenerfguru  A year ago

      I wouldn't doubt that this tech is part of the new Immersive View.

  • @B0bik
    @B0bik A year ago

    What's the difference between this and photogrammetry?

  • @luckyyluck
    @luckyyluck A year ago

    Does this work only for static objects? You can't have moving objects or even dynamic lights; it's just a 3D photo in the end.

  • @yuhunk8053
    @yuhunk8053 5 months ago

    Is this software for viewing 3DGS models open source? Can you provide it?

  • @TheFxdstudios
    @TheFxdstudios A year ago +1

    Impressive!

  • @djayjp
    @djayjp A year ago +1

    Could build an entire game using real-life scenes....

  • @gaussiansplatsss
    @gaussiansplatsss 4 months ago

    Do I need an NVIDIA graphics card to use this Gaussian splatting, or is integrated graphics enough?

  • @defryingpan4290
    @defryingpan4290 A year ago

    I'm interested in whether you can give this thing a collision mesh, because from there you could make an FPS that people can add maps to just from images.

    • @thenerfguru
      @thenerfguru  A year ago

      Not sure. Maybe in UE5. Or, you can have an invisible mesh layer behind the data.

  • @framebyframegames
    @framebyframegames A year ago

    How much storage space does this interactive scene take up?

  • @kwea123
    @kwea123 A year ago +1

    more please🤗

    • @thenerfguru
      @thenerfguru  A year ago

      Definitely posting more thorough content soon. I dove deep into the paper last night trying to wrap my head around what we’re actually viewing.

  • @stancartmankenny
    @stancartmankenny A year ago

    Do the drones also capture depth data? If not, how does this system know where in 3D space to put each "splat"?

    • @thenerfguru
      @thenerfguru  A year ago

      Structure from motion. Also, it doesn't have to accurately place splats; it just needs to mimic the appearance of accuracy.
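A toy sketch of the structure-from-motion idea mentioned here: once the same feature is matched in two photos with known camera poses, its 3D position falls out of triangulating the two viewing rays. The camera positions and the 3D point below are invented for illustration.

```python
import numpy as np

def triangulate(origins, dirs):
    """Least-squares point closest to a set of camera rays.

    Minimizes summed squared distance to each ray p = o + t*d by
    solving  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projects onto the ray's normal plane
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# An invented 3D feature seen from two invented camera positions.
point = np.array([1.0, 2.0, 5.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])]
dirs = [point - o for o in origins]   # perfect feature matches
recovered = triangulate(origins, dirs)
```

With noisy real matches the solve gives the best-fit point rather than an exact one; SfM pipelines like COLMAP do this at scale, and 3DGS then uses the resulting sparse cloud to initialize its splats.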

  • @matveyshishov
    @matveyshishov A year ago +2

    Sorry for a newbie question; I watched this video for the math.
    What happens if the guy by the van moves? If I understand correctly, the 3D scene is reconstructed from a set of photo images? How do you deal with spatial changes between frames, like moving objects or lighting changes?

    • @trucid2
      @trucid2 A year ago +1

      Just a guess, but there could be artifacts due to the motion. That's what happened when I was making panoramic shots out of dozens of photographs.

    • @Nik-dz1yc
      @Nik-dz1yc A year ago +2

      The scene (a set of Gaussians with their attributes) is trained using a backpropagation algorithm, just like how neural networks are trained, so I'm guessing if two different images show differences, it will sort of blur them together.

    • @thenerfguru
      @thenerfguru  A year ago

      @@Nik-dz1yc Correct. You end up with smears or ghosts in the data. If the person is static for several images and then moves, you may have a clear person and a ghost.
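The blur-together intuition from this thread can be seen in a one-parameter toy: optimize a single splat value by gradient descent against two "photos" that disagree (because something moved), and it converges to their average, which is the ghostly smear described above. A hypothetical minimal sketch:

```python
# Two "training photos" disagree about a pixel because an object moved.
target_a, target_b = 0.9, 0.1

c = 0.0    # the splat's color parameter, to be optimized
lr = 0.1   # learning rate
for _ in range(500):
    # Gradient of the loss 0.5*[(c - a)^2 + (c - b)^2] w.r.t. c.
    grad = (c - target_a) + (c - target_b)
    c -= lr * grad

# c converges to the average of the two observations: a ghostly blend.
```

The same averaging happens per splat in a full scene, which is why a person who moves mid-capture shows up as a semi-transparent ghost.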

  • @RED40HOURS
    @RED40HOURS A year ago

    soo trippy!!

  • @hasszhao
    @hasszhao A year ago

    Can you provide the footage of this demo?

  • @leosmi1
    @leosmi1 A year ago

    very good

  • @badxstudio
    @badxstudio A year ago +2

    This looks great. How can we contact you? Email or Insta or Twitter?

  • @sprtndlx
    @sprtndlx A year ago

    Wonder if Google could use this for Maps in some areas.

  • @spuzzdawg
    @spuzzdawg A year ago

    I'm curious as to whether you had permission from the tower operator/owner to fly in such proximity to it. In my previous job, I was the manager of radio services for Air Traffic Control and they were often run off towers like this. Your drone being so close would potentially have interfered with our services, it would have potentially also been illegal to fly there.

    • @thenerfguru
      @thenerfguru  A year ago

      Yes, this was a test site we had clearance at. I wouldn't in my right mind fly around any random tower.

  • @mahmood392
    @mahmood392 A year ago +1

    Any tutorial or GUI implementation to train this?

    • @thenerfguru
      @thenerfguru  A year ago

      Hoping to post it tonight

    • @krystiankrysti1396
      @krystiankrysti1396 A year ago

      @@thenerfguru That would be neat; I'm having issues installing this on Win10.

  • @projectnemesi5950
    @projectnemesi5950 A year ago

    Is a dynamic environment possible?

  • @1.11-y1z
    @1.11-y1z A year ago

    How dynamic is it?

  • @arfaxad2137
    @arfaxad2137 A year ago

    When will game developers start using this?

  • @visualstoryteller6158
    @visualstoryteller6158 A year ago +2

    A walkthrough of this into Unreal or Blender, please.

    • @thenerfguru
      @thenerfguru  A year ago

      Currently, 3D Gaussian splatting is not supported in either UE or Blender. I'll keep everyone posted if that changes.

    • @visualstoryteller6158
      @visualstoryteller6158 A year ago +1

      @@thenerfguru Because it's so clean, with no space/object ghosting like NeRF. Even with baked-out textures for temporary use, this would be better. It does create a mesh, right? I have so many questions, but I'm excited.

    • @thenerfguru
      @thenerfguru  A year ago

      @@visualstoryteller6158 There are no meshes in this scene. You are looking at hundreds of thousands of overlapping 3D Gaussian splats. You can still get ghosting/floaters with this technique; that is usually due to how you capture and to lighting conditions, less so the technology itself.

  • @jimj2683
    @jimj2683 A year ago +207

    I hope google will fly swarms of drones to scan the entire planet for google earth.

    • @thenerfguru
      @thenerfguru  A year ago +23

      Haha! That would be cool.

    • @dyhnen8977
      @dyhnen8977 A year ago +93

      No, that would be invasive.

    • @melol69
      @melol69 A year ago +5

      Street View

    • @chadorr795
      @chadorr795 A year ago +16

      Have you played the newest Flight Simulator game? It’s getting pretty close.

    • @FF-xw8gs
      @FF-xw8gs A year ago

      @@dyhnen8977 They already do this with satellites; the only difference is that you'd be able to see the drones, while you can't see the satellites.

  • @mastershooter64
    @mastershooter64 A year ago

    Does it work with dynamic lighting?

    • @thenerfguru
      @thenerfguru  A year ago

      This specific project does not. I bet someone could write a viewer that does.

  • @tamer27antepli
    @tamer27antepli A year ago

    This could be awesome for Google Maps.

    • @thenerfguru
      @thenerfguru  A year ago +3

      I assure you Google is already ahead of the curve on this technology! They are behind a lot more radiance field research than you realize.

  • @vincentnguyen3068
    @vincentnguyen3068 A year ago

    I want StarCraft with 3D Gaussian splatting now.

  • @nutzeeer
    @nutzeeer A year ago +1

    With how empty atoms are, they might as well be splats.

  • @AMR2442
    @AMR2442 A year ago +1

    Tutorial?

    • @thenerfguru
      @thenerfguru  A year ago

      Getting Started With 3D Gaussian Splats for Windows (Beginner Tutorial)
      ua-cam.com/video/UXtuigy_wYc/v-deo.html

  • @frankdaze2353
    @frankdaze2353 A year ago

    Who can I pay to turn footage or photos into NeRFs?

  • @AndreasMake
    @AndreasMake A year ago +1

    I mean, if photogrammetry used to be more accurate than LiDAR on non tree-penetrating surveys, imagine this method getting the tools for measurement. 😂

    • @thenerfguru
      @thenerfguru  A year ago +1

      I can imagine it! We'll get there.

  • @377omkar
    @377omkar A year ago

    How do I do it??

  • @goyimlmfao
    @goyimlmfao A year ago

    This could be a great feature for construction bidders doing site visits virtually.

  • @evdrivertk
    @evdrivertk A year ago

    Excellent modelling! Do you use a 3D mouse or a Space mouse for better movement around the image?

  • @MrJC1
    @MrJC1 A year ago

    Can anyone remember the Euclideon Infinite Detail engine from ages back? Pretty sure they were Australian, or the main geezer was. This seems like an evolution of that general idea. Is it? Lol.

    • @thenerfguru
      @thenerfguru  A year ago

      Wasn't that a hoax?

    • @MrJC1
      @MrJC1 A year ago

      @@thenerfguru No, it became a legit product, udStream or something. It was massively overblown and no good for games because of the lighting difficulties. Plus, at the time they couldn't reorient objects and stuff. But that was ages ago. It was called Unlimited Detail, not Infinite Detail. My bad.

  • @TheGugustar
    @TheGugustar A year ago

    How big is the final file?

    • @thenerfguru
      @thenerfguru  A year ago +1

      This specific one is around 1 GB. Most projects I have done are between 0.5 and 2 GB.