If I have a camera movement in Blender (FBX), is it possible to mix the NeRF with an external camera movement? If so, it could be crazyyyy
When trying to render I keep getting the message "DLL load failed while importing pyngp: The specified module could not be found." Would you know how to fix this?
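This is usually a Windows DLL-search issue rather than a problem with pyngp itself: from Python 3.8 on, Windows no longer searches PATH for a module's dependent DLLs. A common workaround is to register the relevant directories before the import. Here is a sketch, assuming a Windows build with the CUDA toolkit installed; every path below is an example you would replace with your own:

```python
# Workaround sketch for "DLL load failed while importing pyngp" on Windows.
# Python 3.8+ no longer consults PATH for dependent DLLs, so register the
# CUDA runtime and the instant-ngp build folder explicitly before importing.
import os
import sys

# Example paths: replace with your actual CUDA version and checkout location.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin")
sys.path.append(r"C:\instant-ngp\build")  # folder that contains the pyngp module

import pyngp as ngp  # the dependent DLLs should now resolve
```

If the error persists, a mismatch between your installed CUDA/driver version and the one the binaries were compiled against is the other usual suspect.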
Thanks so much! This was super helpful and easy to follow.
Great to hear!
Btw, is there any improvement in getting a proper 3D mesh out of this format? Some other methods have an export option, but I didn't get a proper mesh.
Instant-NGP is not specifically designed to output photogrammetry-quality meshes. We recommend you look into nvdiffrec for neural-rendering-based 3D object output: github.com/NVlabs/nvdiffrec
Does rendering use less VRAM than training? I'm planning to train a really big scene via SSH on a GPU cluster (~90 GB VRAM), but then I don't know how to visualize the results. Any help would be greatly appreciated.
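The thread doesn't answer this directly, but rendering a trained model doesn't need gradients or optimizer state, so it generally uses much less VRAM than training. One common workflow is to train headlessly on the cluster, save a snapshot, and load that snapshot on a smaller local GPU. A sketch, assuming the pyngp bindings behave as in the repo's scripts/run.py (names and signatures may differ between versions, and the scene path is hypothetical):

```python
# Cluster side: train headlessly, then save the trained weights as a snapshot.
import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/my_big_scene")  # hypothetical scene folder
while testbed.frame():                           # one call = roughly one training iteration
    if testbed.training_step >= 35000:
        break
testbed.save_snapshot("my_big_scene.msgpack", False)  # False = drop optimizer state

# Local side: skip training entirely and just load the trained weights.
# testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
# testbed.load_snapshot("my_big_scene.msgpack")
```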
Amazing 🤩, do any of the fixes in this video impact the previous 'advanced techniques' and 'installation' videos?
The installation video uses the render.py file that was updated in this video. If you are still rendering animations with render.py, we suggest updating the ffmpeg code. If you follow the advanced tips video, we use run.py to render videos; that Python script does not have the frame-rate export issues.
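For anyone patching an older render.py by hand, the usual ffmpeg-side fix is to state the input frame rate explicitly when the rendered frames are assembled into a video. A sketch (the frame pattern and paths are assumptions; the flags themselves are standard ffmpeg options):

```python
# Assemble rendered frames into a video at an explicit frame rate.
# Omitting -framerate makes ffmpeg assume its image-sequence default (25 fps),
# which is one common cause of animations exporting at the wrong speed.
import subprocess

fps = 30  # must match the frame rate the animation was rendered at
subprocess.run([
    "ffmpeg", "-y",              # -y: overwrite the output without prompting
    "-framerate", str(fps),      # input frame rate for the image sequence
    "-i", "frames/frame_%04d.jpg",
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",       # widest player compatibility
    "output.mp4",
], check=True)
```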
Hi! Do you have a video about how you take the photos on location? Thanks
hello
Can you show us how this works in video games?
Do you know if there is an interactive NeRF web player or AR player available?
I would be curious if anyone's got a solution for taking the NeRF and putting it online to share with others, like a NeRF version of Sketchfab. (I suppose you could also just generate a mesh from the NeRF with the color data baked in. What's the process for that as well?)
We suggest looking into Luma AI for this.
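On the mesh half of the question: instant-ngp can extract a marching-cubes mesh from a trained NeRF. A minimal sketch, assuming the pyngp calls used by the repo's scripts/run.py (names and signatures may differ between versions; the snapshot path is a placeholder):

```python
# Marching-cubes mesh export from a trained snapshot.
import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("scene.msgpack")  # hypothetical trained snapshot

res = 256  # grid resolution: higher is finer but slower and heavier
testbed.compute_and_save_marching_cubes_mesh("scene_mesh.obj", [res, res, res])
```

As noted above, the result won't match photogrammetry-quality geometry, but it's a quick way to get something you can upload to Sketchfab-style viewers.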
Very cool, thank you for this! Do you know how to render a transparent background instead of matted black? That would be so useful.
It is possible. I believe you can control the background alpha in the GUI. However, you may need to modify your render script as well.
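As a starting point, here is a minimal sketch of the script-side change, assuming pyngp's Testbed exposes background_color and render() the way the public scripts use them (snapshot path and resolution are placeholders). Note that JPG can't carry an alpha channel, hence the PNG output:

```python
# Render with a transparent background and keep the alpha in the saved file.
import numpy as np
import pyngp as ngp
from PIL import Image

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("scene.msgpack")           # hypothetical trained snapshot
testbed.background_color = [0.0, 0.0, 0.0, 0.0]  # alpha 0 instead of matted black

frame = testbed.render(1920, 1080, 8, True)      # RGBA float array (linear)
rgba = (np.clip(frame, 0.0, 1.0) * 255).astype(np.uint8)  # quick 8-bit dump
Image.fromarray(rgba, "RGBA").save("frame_0001.png")      # PNG preserves alpha
```

The clip-and-scale is a shortcut; if your checkout has the repo's scripts/common.py, its write_image helper does a proper linear-to-sRGB conversion.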
Yes, I've been trying to modify the render script to output EXR instead of JPG, but there are too many different errors that I can't overcome. It would be awesome if we could make a version of the render script that renders an EXR of the beauty/depth/world-position passes, all with alpha channels.
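For what it's worth, here is one way that multi-pass export could look. This is a sketch, not a tested script: it assumes pyngp exposes a render_mode property and a RenderMode enum with Shade/Depth/Positions values (those mode names appear in the GUI), and it writes EXRs through imageio, which needs the freeimage backend installed (imageio.plugins.freeimage.download()):

```python
# Multi-pass EXR export sketch: beauty, depth, and world position, each RGBA.
import imageio.v2 as imageio
import numpy as np
import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("scene.msgpack")           # hypothetical trained snapshot
testbed.background_color = [0.0, 0.0, 0.0, 0.0]  # keep alpha in every pass

passes = {
    "beauty": ngp.RenderMode.Shade,
    "depth": ngp.RenderMode.Depth,
    "world_position": ngp.RenderMode.Positions,
}
for name, mode in passes.items():
    testbed.render_mode = mode
    frame = testbed.render(1920, 1080, 8, True)  # RGBA float32, linear
    # EXR is a float format, so no 8-bit quantization step is needed here.
    imageio.imwrite(f"frame_0001_{name}.exr", frame.astype(np.float32))
```

Writing one EXR per pass sidesteps the multi-layer EXR plumbing; if you need everything in a single file, the OpenEXR Python bindings can write named layers, but that's more involved.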