Capturing Reality With Machine Learning - NeRF 3D Scan Compilation
- Published Jun 29, 2022
- Curious About My Creator Journey? Watch: "How I Turned My Passion for 3D into a Career": • How I Turned My Passio...
Photogrammetry is an art form that has been around for decades, but it's never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA's free Instant NGP tools.
Learn more about my process here: • NeRF vs Photogrammetry... - Science & Technology
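For anyone curious what "posed in COLMAP and trained with Instant NGP" looks like in practice, the command-line flow is roughly the following. This is a hedged sketch, not the author's exact settings: the folder name `my_scan` and the flag values (`--aabb_scale 16`, matcher choice) are illustrative assumptions, and it requires CUDA plus an NVIDIA GPU.

```shell
# Clone and build NVIDIA's Instant NGP (CUDA + NVIDIA GPU required)
git clone --recursive https://github.com/NVlabs/instant-ngp
cmake instant-ngp -B instant-ngp/build
cmake --build instant-ngp/build --config RelWithDebInfo -j

# Pose the photos with COLMAP via the bundled helper script;
# this writes a transforms.json with camera poses next to the images
python instant-ngp/scripts/colmap2nerf.py \
    --run_colmap --colmap_matcher exhaustive \
    --aabb_scale 16 --images my_scan/images

# Train and view the NeRF interactively
./instant-ngp/build/instant-ngp my_scan
```

`--aabb_scale` controls how large a bounding box the NeRF covers; outdoor drone scans generally need a larger value than tabletop objects.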
Amazing stuff!! The clouds and the reflections in the water in the first scene look so good!
Seems like you got amazing results! 👏
What hardware setup did you use behind the scenes, I wonder?
My ZBook's graphics card (8 GB Quadro RTX 4000) wasn't enough for 55 drone images captured with a DJI Phantom Pro.
The CPU is less of a constraint, though a faster one can speed up posing the imagery. You definitely need a GPU with a lot of VRAM to hold the NeRF in memory - I'd suggest a minimum of 8 GB.
wow so cool. I've been trying to use Polycam with LiDAR, with the aim of creating training videos for middle school students using makerspace tools. I feel like the amount of work I'm putting in doesn't match the LiDAR quality. Also, I have an NVIDIA 3090, so I should be okay. I do love LOVE letting students use the LiDAR on the iPhone, but this video and the previous one have me questioning my next move.
LiDAR on iPhone is good for quick scans of a space if you want to do coarse measurements. Polycam's photo mode is good for higher-quality geometry. Since you have a 3090, you can also try RealityCapture (now owned by Epic), which is amazing and GPU accelerated. And of course NeRF is amazing if you value visualization quality or have complex subject matter (water, reflective/refractive materials, etc.).
Any advice on where to find a tutorial on how to accomplish something like this?
Hey! I have an intro video on my channel about Instant NGP. If I made a proper tutorial, what aspects would you want me to cover?
How easy is it to export those into .obj, .fbx or something like that?
You can def convert these implicit representations into an explicit 3D mesh, but the results are a bit hit or miss at the moment - check out my other video about NeRF and Instant NGP.
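For reference, the mesh extraction mentioned above can be done from the command line with Instant NGP's bundled `run.py`, which runs marching cubes over the trained density field. A hedged sketch - the scene path, step count, and grid resolution below are assumptions, not recommended settings:

```shell
# Train headlessly, then extract an .obj mesh via marching cubes
python instant-ngp/scripts/run.py \
    --scene my_scan \
    --n_steps 20000 \
    --save_mesh my_scan/mesh.obj \
    --marching_cubes_res 256
```

The resulting .obj usually needs cleanup (and conversion to .fbx) in a tool like Blender or MeshLab, which is consistent with the "hit or miss" caveat: marching cubes captures geometry but throws away the view-dependent appearance that makes NeRF renders look so good.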