I can imagine rendering an entire dig site using this. I'm just so glad NeRF has moved forward so much that we barely need photogrammetry anymore and can render a whole forgotten city with minimal invasion of the site. And we can zoom in on places without compromise as well. That being said, since accuracy is an issue, I don't know how far this zoom can be depended upon. But I truly hope they bring this work into Project Amarna.
I'm using NeRF and I can't help but see a parallel between it and what they were using in the movie Déjà Vu with Denzel Washington… and also Minority Report… but also viewing memories with the Pensieve in Harry Potter.
Google Earth/Maps isn't using photogrammetry for their 3D models. And they aren't taking the MS Flight Sim approach of synthesizing models. Google is purchasing radar data. There is even publicly available SAR (synthetic aperture radar) data of Norway, for example, at 30 cm resolution.
Maybe we could use generative techniques like diffusion and let the AI guess what is missing in the scene, so it would be possible to produce the same results with less image input and less training time.
Waymo (owned by Google) already used NeRF on millions of photos from its self-driving taxis. The result is essentially Google Street View in 3D that you can move around in just like a game. It would be nice if Google made a competitor to Microsoft Flight Simulator, but with cars that you can drive, NPCs walking around, and ships and planes. Essentially GTA on a global map (with teleportation ;). Maybe also use AI to rebuild stuff with materials and assets from Unreal Engine 5 (or 6…).
You mentioned at 4:13 that you "highly speculate Google Maps relies heavily on manual editing on those 3D rendering because there is no way a typical method for photogrammetry of this massive scale would be this accurate and it also looks like game texture". I work professionally with game development, texturing, photogrammetry, aerial footage, 3D modeling… and I'm absolutely certain, based on experience, that they are not manually editing anything, as that would take too much time by a factor of 1000 (if not much more). Instead, this can only be an in-house photogrammetry-based workflow tailored to this specific purpose.
Kinda strange that they trained NeRF on Google Maps, because the big advantage of NeRF is that it can store complex lighting interaction. But there is none in the Google Maps rendering, and therefore none in the end result. Kind of like training GPT-4 on Cleverbot dialogue.
While all this AI stuff is needed, we still need more detailed raw data to actually make a digital earth twin that is accurate. Self-driving taxis will provide a lot of data, but there is no way around swarms of small drones that scan every nook and cranny of the planet.
Love your channel man! Keep doing what you're doing
Great video! Love seeing the developments of NeRFs. When talking training times, please include what GPU they're on, otherwise it's meaningless.
NGP makes cities look dystopian
Nice for a movie traveling shot
Generative infill like that is the real solution. In that case it would be possible to do the entire planet in detail with all the photos you can find on Google Images.
No Nerfs at all, this is all upgrades! Thank you for these AI Tech videos 🤓🔥
Why does this guy not have millions of subscribers?!?!? 😢
Will the new version be called Google Erf?
Cool! Great content!
The most impressive thing about BungeeNeRF is that it has the properties of both gum and rubber.
This will be nice to put in Microsoft Flight Simulator… because my country looks randomly generated, with big buildings even though there are only houses here…
If it becomes good enough, you could drive cars around in cities! They could add NPCs also. GTA Earth!
I got so dizzy watching this video ...
Let’s call it “Google NeRF”…
Thanks Corridor Crew 😏
Fascinating ❤
I have a question.. is it possible to improve this using SAR satellite data?
Do you have a tutorial to learn how to do it?
I wonder if this could be combined with fractal compression?
Congratulations on the work. I would like to know if it is possible to export to a .obj or .fbx file (3D mesh) to use in games. Thanks
I get a "file not found" error in the project?
Great video! Small mistake though, Barcelona is in Catalonia...
I LOVE NERF
Next step... capture all the atoms on the planet 🤣
Love the name, I'm pretty sure it was going to be named something else before a certain studio tried playing politics lol.
Welp, I guess Google Maps got NERFED
Well, it's not better than what Google Earth provides now… unless everything is rescanned
Not for major cities that Google has already modeled, but for other areas it has the potential to speed up the modeling process.
Gonna be? I thought it already is, because of the quality of the old models
subscribed!!!!
I thought this was going to be a Nerf gun video. So disappointed
NeRF your expectations, man.
“It’s nerf or nothing”, unfortunately this video chose nothing
bruh
Ayyy
Yoooo
It's NeRF or nothing