Transcendental progress, happening so suddenly. Just a year ago, NeRF already seemed like magic! ^‿^
So true!
Absolutely useless; requires too much real-time processing.
What's nerf?
@@MrQuest0 "Neural Radiance Field"
Transcendent rather than transcendental...
Gaussian Splatting totally reminds me of the 3D crime scene scans like in Star Trek: Into Darkness. Amazing stuff.
Totally!
Using this for crime scenes would actually be pretty useful.
@@Axodus same though! 💯💯
I am 38 years old, working with AI and 3D software like Blender, and still this blew my mind like back when I was 14... wow.
I'm 38 years old, work with 3D software, 3D reconstruction from imagery, etc., and I was blown away!
@@thenerfguru Will try it in the next few days with my RTX 3070 and wish for the best, haha.
@@artavenuebln Keep us updated! I want to know how it performs!
How do I use this? Do I need Unreal Engine?
Google maps is going to look insane if this gets implemented.
Excellent results, and props to your flying pattern and skills; the quality of the input is crucial. Can you give us more information about the flight?
I plan on it. Reminder that I need to do an episode on image capture.
If I didn't see the UI on the thing, I'd legit think it's some IRL footage from a drone or something. Next generation of games gonna be WILD.
This IS wild!
Actually reminds me a lot of the PS1 era, seeing an actual photograph as a skybox or texture for the first time.
Love it!
Question! Can we 3D print the results? Like, exporting the file as object then trim the parts we don't need to print and then 3D print them? That would be awesome!
Currently, no. With this project, the goal was novel view synthesis, not discrete solid surfaces. That said, additional research has shown it is possible to extend 3D Gaussian Splatting to solid meshes (which could be converted into 3D-printed objects). However, I would not use this technique for 3D modeling; I would use a different technique such as NVIDIA's Neuralangelo.
@@SkeleTonHammer The output of photogrammetry is a point cloud. 3D Gaussians are already a way of displaying point cloud data, so what you're describing is like a lossy conversion from one point cloud to another.
To get a mesh to use in something like a game engine or 3D printer, there are a bunch of surface reconstruction algorithms to use. Hopefully researchers are working on those too.
@@BrianHockenmaier"lossy" ? I guess, it depends on the point of camera view and quality of input data.
@@Anton_Sh. Necessarily lossy, because the input data to a Gaussian is a point cloud, and the output data of photogrammetry is a point cloud. There is no more information to glean from running photogrammetry on screenshots of a Gaussian scene, so essentially the process we're talking about is a lossy conversion of one point cloud to another with a lot of compute in between :)
Curious to know how much VRAM this took to run the viewer after it was already trained
Gaussian splatting does use more VRAM, but that's not a technological boundary or anything. The only reason current consumer GPUs don't have 32 GB+ of VRAM like the NVIDIA Tesla series is that they don't need it. Once games and applications start using Gaussian splatting, specs will follow.
Would still be nice to see some hard numbers on it. Any resource could become a bottleneck depending on the type of application, including VRAM
@@nullptr. Gaussian splatting will not be the catalyst, or at least it seems highly unlikely. This is way too niche and will stay that way for the foreseeable future. I doubt this tech will be used by games in the coming 5+ years, so games will not be the driver either: if you develop a game that only runs on 20+ GB of VRAM, you leave out the vast majority of gamers. No, what will very likely drive the increase in VRAM is machine learning: local large language models and Stable Diffusion.
@@Netsuko I don't see any need for local language models or Stable Diffusion. Some small function-specific networks, yes, but a giant service that's already available in the cloud? Not going to happen.
Wow, this comment thread went off the rails. TBH, most of the high VRAM requirement is during training. Rendering the scene takes less, but it's still not low. I don't have hard numbers. I can run some tests and make a video out of it.
Instant NeRF on the other hand takes very minimal VRAM to render in real time. But the quality is much lower.
Looks so good. Much better than the ugly blobby mess you get with regular photogrammetry techniques.
Can't wait for the tutorial on how to install the damn thing :D
Still working on it. Sorry for the delay!
The Gaussian splattering nerd? 😅
This is mental. Almost everything I've seen people making is nearly indistinguishable from live video from the air/ground.
Mental! Subbed.
Thanks! This technology is hot!
This is magic. I've been playing with NeRFs since last year; looks like I missed a bunch!
This is brand new! So you're staying up to speed 😉
I used to build those towers for real, lots of them all different but the same.
In fact, I have quite a few parts from some of them here in the shop (antennas, ice bridges, heliax connectors and supports, climbing pegs, various pieces of equipment from inside the equipment shelter too), and I know how to work Blender, so I could create one of these sites from pure memory with near-perfect accuracy, since they are burned into my brain.
That's crazy cool!
LOOKS ALMOST REAL
Yes! Crazy good!
how did I end up here??? WTF AM I SEEING??? this is amazing!!
Hahaha! It’s next level!
1st thought: this could be so cool for video game graphics
2nd thought: Holy shit, this would be fantastic to do VR tours of buildings or showcase museum pieces.
Yea, now you can easily get it in Unity (see my video on it). From there, virtual tours heck ya!
This is going to be groundbreaking for new VR tools.
Very excited for that tutorial! So glad I found your new channel haha
It’s posted!
This looks better than what realtors use for 3D tours of houses, I would love for this to replace that technology. Imagine that a drone could be used instead and be programmed like a robot vacuum to sweep the house and get this level of detail.
This would end the practice of taking pictures of each room and give the potential buyers an idea of the layout of the house.
Have you checked out Luma AI’s new Flythroughs app? They are essentially doing this.
@@thenerfguru I am ready for it to be mainstream; the current technology is headache-inducing and should be banned from all usage, with that zoom effect to transition to each waypoint. I have always hated the point-to-point 3D experience for more reasons than that: fixed angles, stopping at the entrance of a room, etc…
Amazing effect for static backgrounds
This is spectacular, brilliant work 👏👏👏👏
Thank you!
I'm really excited to use this for cool and weird art
Yaaas!
Sooo what you are saying is: we can input any movie and get out a 3D scene?! Wow! Hold on to your papers because this means any movie ever made can now be transformed into a 3D movie!
Not exactly! You need parallax movement in the imagery. If I have a scene filmed from a stationary position, this would not work. Also, moving objects in the scene become ghostly floaters.
@@thenerfguru There is AI technology that can lay a depth map over an image; maybe that tech can be combined! Basically, the AI was trained on 3D data (images with depth), so now it can be run backwards to generate depth for images. I bet my hat this can be combined!
@@thenerfguru Thank you for the answer! I see, so moving objects would require another dimension to be added to the Gaussians, to add time. I wonder if a phone's LiDAR scanner can help, or just plain 3D video. Maybe the iPhone will support recording this, together with Vision Pro true 3D movie playback. That would be rad.
Waiting for tutorial, great work!
It’s posted! ua-cam.com/video/UXtuigy_wYc/v-deo.htmlsi=K2sXGKfp7MyJoFLS
It would be so cool if the software could create a 3D world you can be in using this tech. Like, put a VR camera in there and boom, you're in the 3D world you scanned.
I’ve seen VR with a Unity plugin now
Reallllly impressive! Eager to test, but I read training wants 24 GB of VRAM...
That is correct. However, that may change. Stay tuned. It's technically possible to train with less VRAM.
By tweaking the parameters a bit, it's possible to make it work with less memory (around 8 GB, depending on the dataset). Quality takes a hit, but it's still impressive.
@@arianaramos1506 I'd be curious to see which parameters have to be changed. I have not dived too deep into that yet.
@@thenerfguru Their FAQ talks about changing --densify_grad_threshold, --densification_interval, or --densify_until_iter. I've tried increasing the first one, which causes fewer points to be kept, makes training go faster, and lets it run with less VRAM. Fewer points decrease overall quality, but the results were still nice compared with InstantNGP on the same graphics card.
@@arianaramos1506 Thank you for the info! I will include that in the "getting started" guide I am working on for folks who do not have 24 GB of VRAM.
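For anyone wanting to try that without 24 GB, here is a rough sketch of what such a run could look like, assuming the repo's train.py and the flag names from the FAQ mentioned above; the numeric values are illustrative guesses, not tested settings.

```python
# Hypothetical lower-VRAM training run of the repo's train.py, using the
# densification flags from the FAQ discussed above. The numbers are guesses
# to illustrate the idea, not recommended settings.
import subprocess

subprocess.run([
    "python", "train.py",
    "-s", "path/to/colmap_dataset",          # COLMAP-processed input folder
    "--densify_grad_threshold", "0.0004",    # higher threshold -> fewer new splats kept
    "--densification_interval", "200",       # densify less frequently (assumed value)
    "--densify_until_iter", "10000",         # stop densifying earlier (assumed value)
], check=True)
```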
You can count on Google to be the first to widely adopt it and update Google Maps with this technology
They are one of the largest contributors to radiance field technology in general, for Google Maps Immersive View and Waymo.
They use spherical harmonics in the splat color function. I like that, but why? To model directional lighting effects?
Yes, that’s my understanding.
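For the curious, a minimal sketch of the idea, keeping only degree-0/1 spherical harmonics: each splat's color shifts slightly with the viewing direction, which is how reflections and specular tints get approximated. The constants and the +0.5 offset follow my reading of the reference implementation, so treat the exact conventions as assumptions.

```python
import numpy as np

# Degree-0/1 real spherical harmonics constants (standard values).
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def splat_color(sh_coeffs, view_dir):
    """View-dependent RGB for one splat.
    sh_coeffs: (4, 3) array, one row of SH coefficients per basis function.
    view_dir:  unit vector from the camera toward the splat center."""
    x, y, z = view_dir
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])  # SH basis evaluated at this direction
    rgb = basis @ sh_coeffs + 0.5                      # offset so the DC term centers around gray
    return np.clip(rgb, 0.0, 1.0)

# Same splat, two viewing directions -> slightly different colors.
coeffs = np.random.default_rng(0).normal(scale=0.1, size=(4, 3))
print(splat_color(coeffs, np.array([0.0, 0.0, 1.0])))
print(splat_color(coeffs, np.array([0.7071, 0.0, 0.7071])))
```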
I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update of Imagine 3D. Fingers crossed; that would be insane.
The only blocker right now is licensing. Nerfstudio is looking to do their own version.
Really nice work!!! I am a total fan of what you have achieved. Did you try it in rainy weather? Which app did you use for the helix pattern? How could we discuss your work further?
I think Horizon Zero Dawn's Tallneck would be possible with this technology. Awesome.
🔥
When he goes outside of the data it's kind of nightmarish and intriguing.
Haha, that's typical of any radiance field.
Yeah, honestly it has the feeling of a dream, and not in the poetic sense of the word but the actual mental image of a dream you had but barely remember.
Fascinating. Is it measurable, and if so, what is the accuracy? And do you find any notable cons compared to photogrammetry?
It’s not measurable like a point cloud or mesh…yet. I bet those tools are coming though.
I mean, if photogrammetry used to be more accurate than LiDAR on non tree-penetrating surveys, imagine this method getting the tools for measurement. 😂
Interesting. Can you export data from it? Like to use in a 3D program like 3dsmax?
You can view it in Unity and UE5 with plugins. More platforms will be building plugins I am sure. It's not a huge leap compared to NeRFs for visualization.
Thanks. Well, I will keep an eye out for future plugins. Would be awesome to be able to export high-quality environments, because Google Maps ain't cutting it. @@thenerfguru
Amazing capture! Is it possible to export as an OBJ file?
Not with this project. Give it time and I bet you will be able to. Now I don't know about the quality of the textures though.
That's astonishing
This is so cool. Are these methods possibly the future of photogrammetry? I only recently started becoming familiar with the subject while researching computer vision.
I would consider this to be a visualization layer that is complementary.
Any tips on best practices for the drone capture here? I am hoping to do my first drone flight this weekend and it would be great to have some tips on dos and don’ts
This is _insane,_ holy shit...
Has anyone tried doing this, but for a virtual world made in something like Blender? It would be cool if you could get some insane graphics in real time.
I did a video using Unity. Blender and UE5 are coming.
Awesome! Any tuts on setting the camera movements and recording/exporting them?
Your best bet is to use the nerfstudio viewer. Here is a tutorial: ua-cam.com/video/A1Gbycj0bWw/v-deo.html
Great work as always! Besides the quality, is this also lighter to render?
Training takes more time than other fast NeRFs. However, viewing it once trained is lighter which allows it to run in real-time.
@@thenerfguru that’s amazing! I wonder if the final render could be hosted on html5/on the web
@@thenerfguru Do you know if the final render could be hosted on HTML5/on the web?
@@benoitperrin6243 no reason why this couldn't be done with WASM
I don't understand why they don't just use a game-style view, like using the mouse to look around.
It's amazing that it even has real-time reflections.
It’s because it was built on a viewer that is used for comparing datasets. A few different projects have integrated the tech with game engines that have better nav.
this tech is amazing
Possible for a video on how to install and run it? What is the minimum compute for the GPU for this to work? Great video!
I am making a video this week. At minimum, you need a GPU with 24 GB of VRAM.
@thenerfguru what's your input? Video, image?
@@SuleBandi He literally mentioned it. 300 photos.
It's gonna be insane if and when Google Maps or Google Earth uses this method.
What are your input images like resolution-wise, and does the training Python script downsize them to 1.6k pixels for you too? Because my results aren't nearly as clear and high-res as yours. Lots of "white fog". My training also takes a lot less time (but maybe that's thanks to the 4090 I bought specifically for generative AI stuff)... Anyway, thanks for being one of the few people covering this technique right now!
Which software and version?
It's a project on GitHub: github.com/graphdeco-inria/gaussian-splatting
This would be great if we could get it into games. Just need to figure out the lighting.
Is there any way to import these Gaussian splats as assets / point cloud or voxel objects into a game engine like Unreal?
Yes. Unity import is my next video. The one after that is Unreal Engine
I wonder how this can help with 3D architecture visualization. Any thoughts on this? Are you able to save this out as an OBJ, FBX, etc.? Can you import 3D objects into this program?
I think there'd have to be some intermediate process, as Gaussian splatting, at least as I'm familiar with it in 2D, is just a series of monochromatic brush-stroke-looking swatches overlaid with each other. Looking at this demo, that's what seems to be going on here, too. If you want a clearer picture of what's going on, search "Gaussian Splatting" in Shadertoy and it'll show you how they're put together to make an image.
Yea, too soon. Come back in 6 months to a year.
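To make the "overlapping swatches" description above concrete, here is a toy 2D sketch of the compositing idea: a few anisotropic Gaussians blended front to back. This is only an illustration of the principle, not the actual 3DGS rasterizer.

```python
import numpy as np

# Toy 2D Gaussian "splatting": a few anisotropic blobs composited front-to-back
# with alpha blending. All values below are arbitrary illustration data.
H, W = 128, 128
ys, xs = np.mgrid[0:H, 0:W]
pix = np.stack([xs, ys], axis=-1).astype(np.float32)     # (H, W, 2) pixel coordinates
image = np.zeros((H, W, 3), dtype=np.float32)
transmittance = np.ones((H, W, 1), dtype=np.float32)

# Each splat: center, 2x2 covariance, RGB color, opacity.
splats = [
    (np.array([40.0, 60.0]), np.array([[300.0, 80.0], [80.0, 60.0]]), np.array([1.0, 0.3, 0.2]), 0.8),
    (np.array([80.0, 70.0]), np.array([[60.0, -40.0], [-40.0, 200.0]]), np.array([0.2, 0.6, 1.0]), 0.6),
]

for center, cov, color, opacity in splats:               # assume splats are already depth-sorted
    d = pix - center
    inv = np.linalg.inv(cov)
    # Gaussian falloff exp(-0.5 * d^T Sigma^-1 d) evaluated per pixel.
    power = -0.5 * (d[..., 0]**2 * inv[0, 0]
                    + 2 * d[..., 0] * d[..., 1] * inv[0, 1]
                    + d[..., 1]**2 * inv[1, 1])
    alpha = (opacity * np.exp(power))[..., None]
    image += transmittance * alpha * color               # front-to-back "over" compositing
    transmittance *= (1.0 - alpha)
```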
This is crazy. How good is it when you start adding things to the scene that were not there? Can you? Does it stand out like a sore thumb?
Noice. I was really disappointed with NeRF; the hype was there, but when I tried it out it was never anything close to the demos presented. I'll have to give this a try. Question though: how capable is this of doing things like measuring features or exporting 3D assets? Obviously some scaling or reference lengths would be supplied.
If your goal is measuring surfaces or 3D Geometry in general, I suggest Neuralangelo.
This is great and all, but not for closeups. This would be beneficial if a 3D program could recreate the landscape with its own assets and textures for fast level production.
SAO really be coming to life
Boom!
The cable detail is crazy. Does that mesh?
This implementation does not mesh. Stay tuned for my experiments with Neuralangelo. That meshes!
Ah yeah I saw they released that now, too many toys to play with :) @@thenerfguru
What hardware was used for your training? Is this within the realm of possibility with a DJI Mini 3 Pro and 5950x CPU/RTX 4090 GPU?
That's amazing. Is it possible to import into unreal engine?
Not currently
Cool! But how is it different from NeRFs?
That warrants its own video. However, in a sentence: 3DGS generates a scene similar to a dense point cloud (but the points are splats) that can easily be rendered by your GPU. So you get really good visuals that run at 100 fps in real time.
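If you want to see that "point cloud of splats" for yourself, here is a quick sketch for poking at a trained scene's output file. The path layout and property names reflect my understanding of the reference implementation's .ply output and may differ.

```python
from plyfile import PlyData  # pip install plyfile

# Rough sketch: peek at a trained scene's point_cloud.ply and count splats.
# Property names (x, y, z, opacity, scale_*, rot_*, f_dc_*, f_rest_*) are
# assumptions based on the reference implementation's output format.
ply = PlyData.read("output/<run_id>/point_cloud/iteration_30000/point_cloud.ply")
verts = ply["vertex"]

print("number of splats:", verts.count)
print("per-splat attributes:", [p.name for p in verts.properties])
print("first splat position:", verts["x"][0], verts["y"][0], verts["z"][0])
```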
Are you going to make a tutorial on this? It’s so awesome
This tutorial?
Getting Started With 3D Gaussian Splats for Windows (Beginner Tutorial)
ua-cam.com/video/UXtuigy_wYc/v-deo.html
Imagine games and vr with this
Oh yes! This is also a fast way to world-build (at least when you want a duplicate of a real environment). Google Earth VR would be interesting using this technology. They are already diving into NeRFs.
But can you export the 3D model in a standard format?
Not with this code implementation. It’s possible though
Could you then pass an image recognition program to identify cracks on the antennas?
Can this be the future for street view for google maps?
I wouldn’t doubt if this tech is part of the new immersive view
What's the difference between this and photogrammetry?
This works only for static objects? You can't have moving objects or even dynamic lights; it's just a 3D photo in the end.
Is this software for viewing 3DGS models open source? Can you provide it?
Impressive!
Thank you!
Could build an entire game using real life scenes....
Why not?!
Do I need an NVIDIA graphics card to use this Gaussian splatting, or is integrated graphics enough?
I’m interested if you can give this thing a collision mesh, because from there you can make an FPS that people can add maps to just from images.
Not sure. Maybe in UE5. Or, you can have an invisible mesh layer behind the data.
How much storage space does this interactive scene take up?
more please🤗
Definitely posting more thorough content soon. I dove deep into the paper last night trying to wrap my head around what we’re actually viewing.
Do the drones also capture depth data? If not, how does this system know where in 3D space to put each "splat"?
Structure from motion...also it doesn't have to accurately place splats. It just needs to mimic the appearance of accuracy.
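Roughly, that structure-from-motion step estimates camera poses and a sparse point cloud from the overlapping photos, which seeds the splats. If I recall correctly, the repo wraps COLMAP in a convert script; a bare-bones version might look like this (flags are the basic COLMAP CLI ones and may need tuning):

```python
import subprocess

# Sketch of a minimal COLMAP structure-from-motion pipeline:
# feature extraction -> matching -> sparse reconstruction (camera poses + points).
db, images, sparse = "colmap/database.db", "input_images", "colmap/sparse"

subprocess.run(["colmap", "feature_extractor", "--database_path", db, "--image_path", images], check=True)
subprocess.run(["colmap", "exhaustive_matcher", "--database_path", db], check=True)
subprocess.run(["colmap", "mapper", "--database_path", db, "--image_path", images, "--output_path", sparse], check=True)
```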
Sorry for a newbie question, I watched this video for the math.
What happens if the guy by the van moves? If I understand it correctly, the 3D scene is reconstructed from a set of photo images? How do you deal with spatial changes between frames, like moving objects or lighting changes?
Just a guess, but there could be artifacts due to the motion. That's what happened when I was making panoramic shots out of dozens of photographs.
The scene (a set of Gaussians with their attributes) is trained using a backpropagation algorithm, just like how neural networks are trained, so I'm guessing if two different images show differences, it will sort of blur them together.
@@Nik-dz1yc Correct. You end up with smears or ghosts in the data. If the person is static for several images and then moves, you may have a clear person and a ghost.
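A toy illustration of why that blurring happens: the training loss pulls each rendered pixel toward every photo it appears in, so contradictory observations settle on a washed-out average. The real 3DGS loss combines L1 and D-SSIM over whole images; this just shows the averaging intuition on a single pixel with a squared-error stand-in.

```python
import numpy as np

# One pixel seen in 4 photos: the person is there in 2 of them and gone in 2.
# Gradient descent on the mean squared error drives the rendered value toward
# the average of the conflicting observations -> a semi-transparent "ghost".
observations = np.array([0.9, 0.9, 0.1, 0.1])
pixel = 0.3                                      # rendered value being optimized
for _ in range(500):
    grad = 2 * np.mean(pixel - observations)     # gradient of mean squared error
    pixel -= 0.05 * grad
print(round(pixel, 3))                           # ~0.5, the washed-out compromise
```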
soo trippy!!
Can you provide the footage of this demo?
very good
This looks great. How can we contact you? Email or Insta or Twitter?
Nice meeting with you guys!
wonder if google could use this for maps in some areas
I'm curious as to whether you had permission from the tower operator/owner to fly in such proximity to it. In my previous job, I was the manager of radio services for Air Traffic Control and they were often run off towers like this. Your drone being so close would potentially have interfered with our services, it would have potentially also been illegal to fly there.
Yes, this was a test site we had clearance at. I wouldn't in my right mind fly around any random tower.
Any tutorial or GUI implementation to train this?
Hoping to post it tonight
@@thenerfguru That would be neat; I'm having issues installing this on Win10.
Dynamic environment possible?
How dynamic is it?
When will game developers start using this?
Walkthrough of this to Unreal or Blender, please...
Currently, 3D Gaussian splatting is not supported in either UE or Blender. I'll keep everyone posted if that changes.
@@thenerfguru Because it's so clean and there's no space/object ghosting like with NeRF; even with baked-out textures for temporary use this would be better. It does create a mesh, right? Have so many questions, but excited.
@@visualstoryteller6158 There are no meshes in this scene. You are looking at hundreds of thousands of overlapping 3D Gaussian splats. You can still get ghosting/floaters with this technique; that is usually due to how you capture and the lighting conditions, and less to do with the technology itself.
I hope google will fly swarms of drones to scan the entire planet for google earth.
Haha! That would be cool.
No that would be invasive
street view
Have you played the newest Flight Simulator game? It’s getting pretty close.
@@dyhnen8977 They already do this with satellites; the only difference is that you will be able to see the drones, while you cannot see the satellites.
Does it work with dynamic lighting?
This specific project does not. I bet someone could write a viewer that does.
This could be awesome for Google maps
I assure you Google is already ahead of the curve on this technology! They are behind a lot more radiance field research than you realize.
I want StarCraft with 3D Gaussian Splatting now.
Splatcraft
With how empty atoms are they might as well be splats
Tutorial?
Getting Started With 3D Gaussian Splats for Windows (Beginner Tutorial)
ua-cam.com/video/UXtuigy_wYc/v-deo.html
Who can I pay to turn footage or photos into NeRFs?
I mean, if photogrammetry used to be more accurate than LiDAR on non tree-penetrating surveys, imagine this method getting the tools for measurement. 😂
I can imagine it! We'll get there.
How do you do it??
this could be a great feature for construction bidders doing site visits virtually
Excellent modelling! Do you use a 3D mouse or a Space mouse for better movement around the image?
Can anyone remember the Euclideon Infinite Detail engine from ages back? Pretty sure they were Australian, or the main geezer was. This seems like an evolution of that general idea. Is it? Lol.
Wasn't that a hoax?
@@thenerfguru No. It became a legit product... udStream or something. It was massively overblown and no good for games because of the lighting difficulties. Plus, at the time they couldn't reorient objects and stuff. But that was ages ago. It was called Unlimited Detail, not Infinite Detail. My bad.
How big is the final file?
This specific one is around 1GB. Most projects I have done are between .5 and 2 GB.
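That size lines up with a back-of-the-envelope estimate, assuming the output .ply stores roughly 60-ish 32-bit floats per splat (position, normals, spherical harmonic coefficients, opacity, scale, rotation); the exact attribute counts are my assumption, not from the video.

```python
# Back-of-the-envelope file size check (attribute counts are assumptions).
floats_per_splat = 3 + 3 + 48 + 1 + 3 + 4         # pos + normal + SH + opacity + scale + rotation
bytes_per_splat = floats_per_splat * 4             # ~248 bytes as 32-bit floats
num_splats = 4_000_000                             # a few million is plausible for a scene like this
print(f"{num_splats * bytes_per_splat / 1e9:.2f} GB")  # ~0.99 GB, in line with ~1 GB above
```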