I really appreciate you sharing your insights and experience with this tool. Thanks, and I look forward to more.
Thanks, Alan!
I wonder if this tool could be used for police work? If the police manage to capture multiple surveillance photos, they could also composite them with scenes captured afterward; the deep learning model could then evaluate how things have changed, highlighting differences and showing clues to detectives.
Dear author, please upload in 1080p! This is a great and informative video that explained a lot about NeRF settings, but it makes me sad to watch it in 720p.
Unfortunately the livestream was recorded in 720p. That was our mistake! We will have additional content soon at 1080p resolution.
@@EveryPoint Try running it through an AI super-resolution model and see if that works.
This video is amazing. I just found out about you last night and already watched all your videos. You're hilarious.
Thank you!
This is simply amazing. Thank you A LOT.
Glad it was helpful!
Hey, I need help. I was getting:
python: can't open file 'F:\Tutorial gp\instant-ngp\scripts\render.py': [Errno 2] No such file or directory
I checked under scripts and render.py is not there. Is that why?
You have two options: use bycloudai's render.py script from the GitHub fork (github.com/bycloudai/instant-ngp-Windows), or use run.py, which we cover in our advanced tips video at the one-hour mark: ua-cam.com/video/_xUlxTeEgoM/v-deo.html
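For reference, a typical video-render call with run.py looks roughly like "python scripts/run.py --scene data/yourscene --load_snapshot base.msgpack --video_camera_path base_cam.json --video_n_seconds 5 --video_fps 30 --width 1920 --height 1080 --video_output out.mp4". The flag names are recalled from the instant-ngp repository and have changed between versions, and the file names here are placeholders, so treat this as a sketch and check python scripts/run.py --help in your own checkout.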
I first created the fox NeRF, but after that, when I used my own images and ran the COLMAP command, it didn't give me a transforms.json file. What should I do? It says:
D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
That is a new one for us. Are you using RAW files or HDR video?
@@EveryPoint I figured it out. The problem was that I used a video converted to an image sequence, but the video was captured on a phone at 1080p, and because my smartphone was forcing image stabilization, quite a few of the images had some motion blur. I was also using COLMAP exhaustive matching, which sometimes crashes and isn't good with image sequences. Another creator suggested COLMAP sequential matching, which works well, and the final NeRF was really good and clean with very little noise.
Subscribed - I wish you made more videos, they are valuable and educational!
Great ... I have a question: is it possible to export some sort of point cloud? That would be great.
Not currently possible
@@EveryPoint thanks
Some thoughts about making it a real-time, good-quality system:
1. For estimating camera positions, you could use the LoFTR transformer from the Kornia library (instead of COLMAP) for keypoint detection and matching; I think it's much faster.
2. For a smooth mesh, maybe a neural TSDF can do the trick, if you aren't using one yet ;)
3. It would be great if you added normal estimation for the reconstructed 3D coordinates.
Good job!
Perhaps the NVIDIA AI team is reading these comments!
@Elior ... With your knowledge of the topic, would it be theoretically possible to render this in real time in VR, or is that out of scope in terms of hardware requirements and/or how the rendering engine works?
@@fraenkfurt With today's methods, near real time could be achievable, maybe 0.1 fps (each scene is a "frame" in this context), and faster in an end-to-end product. Hardware limitations are crucial for sure. Recently I read a paper called "TensoRF: Tensorial Radiance Fields"; the authors said a mixture of that and NGP could lead to some interesting results. I'm not sure exactly what you mean by rendering engines, since I have only worked with 3D structures in a non-real-time environment.
@@eliorkalfon191 The fact that you would need to render the scene twice, with slight offsets and at a high resolution, would mean your hardware would have to be very high end. Cost-prohibitive at this point. The real-time rendering on our RTX 3080 runs at a very low resolution; at 1920x1080 we render 1 frame every 3 seconds.
Does using more images improve the final quality, or does it stop mattering at some point?
An extremely well done video, congratulations!
Could you please share the photos used for the bridge reconstruction? If they are already available, where can I find them?
Thank you.
Can you export the file and take measurements on the 3D model?
You can export a mesh; however, it is lower quality than what you would produce with traditional photogrammetry.
@@EveryPoint There goes the Unreal Engine Nanite dreams with this tech!
So, like, I can't really use the mesh OBJ model?
Great video, mate! I was wondering how you go about exporting the model + texture so you can use it in Blender?
NVIDIA Instant NeRF does not produce a high-quality textured mesh yet. Its primary use is novel view synthesis. We suggest keeping an eye on this space, as the technology is advancing quickly.
@@EveryPoint I see. Can we export the output we currently get? Some of my scans look great, and I wish I could just export them for use in Blender.
I have NGP set up and it's working great so far.
@@EveryPoint Does it provide something to work off of? Is it possible at all to create a gltf / glb file with this technique?
I'm new to all of this, by the way. Thanks for sharing.
@@techieinside1277 As you have probably noticed by now, the mesh output is not optimal. Currently, traditional photogrammetry will produce a more usable textured mesh model.
Is there a way to take your first dataset and JSON and combine it with a second one? I.e., string multiple rooms of a house together from separate datasets?
Technically, you could do something like this. The limitation would be the total VRAM this project would take to run.
@@EveryPoint Right, after posting the question I found out how limited the scale can be. Thanks for the amazing tutorial & response.
We expect the scale issue to improve over time. Also, a cloud-based service could be built where the hardware limitations are overcome.
Remember, this technology first came out only two years ago!
Seeing this stuff from start to finish caters to my learning style. Soooo flipping helpful! Thanks for the tutorial! Have you seen Nvidia's 'nvdiffrec' yet? Apparently it's like photogrammetry, but it spits out a model AND a complete PBR material set!
Yes, it uses neural networks to compute an SDF and materials as separate flows that feed into a solid model.
Is it "easy" to install and run? What is the input data? Also, is it a video or just a single image?
the automatic bezier curves on the camera paths... THANK you
One reason we keep using Instant NeRF! The camera path tools are handy!
I wonder when some sort of documentation will appear?
There is quite a bit of documentation on the GitHub Page
Please, I am having issues with a custom dataset. The rendering is poor.
Has there been an update to Instant NeRFs?
There are updates just about weekly.
:/ I'm getting:
'colmap' is not recognized as an internal or external command, operable program or batch file.
FATAL: command failed
I can't figure out why. It makes me want to tear my hair out.
Our apologies for the late reply. COLMAP needs to be added to your PATH, assuming it has been installed.
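As a quick sanity check (just a sketch), you can ask Python whether colmap is visible on the PATH of whatever prompt you are working in:

import shutil

# Prints the resolved colmap executable, or a warning if the current shell cannot see it.
path = shutil.which("colmap")
print(path if path else "colmap not found on PATH - add the COLMAP install folder to PATH")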
Awesome !
Is there a way to save RGB, depth, and other textures from a view?
I worked my way through all of this as a non-coder as well, and I have the same issue with Python 3.9 and Python 3.10 (which I use for another task that is somewhat important to me). Is there any way to solve it without removing it?
If you have build issues, we suggest editing the CMakeCache where 3.10 was used and rebuilding the codebase.
You can also try adding the build folder to your Python path in the environment variables editor. This may solve the issues you are having.
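As a minimal sketch of that second suggestion, you can also put the build folder on the Python path from inside a script; the folder location below is hypothetical, so adjust it to wherever you built instant-ngp.

import sys

BUILD_DIR = r"C:\instant-ngp\build"  # hypothetical build folder; change to your own path
sys.path.append(BUILD_DIR)

import pyngp as ngp  # raises ModuleNotFoundError if the build did not produce the pyngp module
print("pyngp imported from", ngp.__file__)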
@@EveryPoint I managed in the end, after I uninstalled everything and then did a fresh install, and I started to create some NeRFs ... instant NGPs, but the results are terrible :( I used the same datasets I had used before for photogrammetry; for example, I used 700 pictures of a forest with a bridge. In photogrammetry it all worked, but in NeRF it looks like a mess. Then I tried other, smaller sets, but with absolutely disappointing results as well. Am I doing something wrong? It looks to me like COLMAP does everything fine, and then when I start instant NGP it is not doing the job properly.
Great video, thanks for the information! I was wondering if you have had any experience with reflective surfaces. As you know, that is usually the Achilles' heel of photogrammetry.
They are also an Achilles' heel for NeRFs. A reflection creates a parallel world inside of the mirror.
@@EveryPoint That is actually interesting to know
Can you please help me decide between NVIDIA Instant NeRF and Meshroom from AliceVision?
It depends on what you need as the result. If you need meshes and good surface data, then Meshroom is ideal. Instant NGP produces images.
I wonder if you can convert this to a usable poly mesh.
Look at nvdiffrec if you want to do that.
Thank you, Jonathan, for a phenomenal and very effective tutorial. It could still be improved if it were made available in HD or higher resolution. Some of the fonts in the video content appear too small when I watch the video out of full screen.
Good day, I am having issues putting the object inside the unit box. Which parameters am I supposed to change?
Could you help me fix this? Thank you.
ERROR: Not enough GPU memory to match 12924 features. Reduce the maximum number of matches.
ERROR: SiftGPU not fully supported
This is an issue with COLMAP. Did you install and/or compile the version with GPU support?
Hi! Thank you for the great video. Is there a way to render a cropped scene? The entire background comes back when I render or reopen the scene, and I want to render without too many clouds.
You have two options: edit the aabb_scale in the transforms file, or hack the run.py script to render the video with the crop set in the GUI. Perhaps this will be a future video.
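For the first option, a minimal sketch of editing that value with Python is below; the key name aabb_scale matches the NVIDIA instructions quoted later in this thread, while the value 4 is just an illustration (use a power of two that fits your scene).

import json

with open("transforms.json", "r") as f:
    transforms = json.load(f)

transforms["aabb_scale"] = 4  # smaller power of two = tighter bounding volume, less background
with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)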
@@EveryPoint That would be cool, because I changed the scale in transform.json, but the crop resets to 16 when opening the scene or rendering.
I have a problem: "'colmap' is not recognized as an internal or external command."
Does anybody know what is going on?
You may need to install it and add it to your PATH.
This stream was really helpful, but for some reason my render.py script doesn't exist. Also, I've downloaded ffmpeg but can't find its destination to add to the PATH.
Oh, looks like I've solved it. render.py was only in bycloudai's fork.
@@rikopara Yes! You can create your own render script too. However, bycloudai's version works great. As for ffmpeg, most likely it is here: C:\ffmpeg\bin
Hi! Did you find out how to add the script? I tried copying the one from bycloudai, but it still does not seem to work. I get the error "ModuleNotFoundError: No module named 'pyngp'". I tried installing his version, but only the newly updated version works on my PC.
@@svenbenard5000 Did you copy the whole fork or just the render.py file? Using the newest build with bycloudai's render.py file works for me.
@@svenbenard5000 Also check for "pyngp" files in the /instant-ngp/build dir. If there aren't any, you probably skipped some installation steps.
Thanks for the breakdown, Jonathan! But how does one go about starting the GUI without initiating training for a new data set? I just want to be able to Load the .msgpack from a previously trained project.
Use ./build/testbed --no-gui or python scripts/run.py.
You can load a saved snapshot with the Python bindings load_snapshot / save_snapshot (see scripts/run.py for example usage).
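A rough sketch of those bindings follows; it assumes you run from the instant-ngp root so the build folder is importable, uses a hypothetical snapshot path, and the exact constructor has changed between instant-ngp versions, so scripts/run.py in your checkout remains the authoritative example.

import sys
sys.path.append("build")  # so the compiled pyngp module can be found when run from the repo root

import pyngp as ngp  # instant-ngp Python bindings

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)  # constructor/mode name recalled from run.py; may differ in your version
testbed.load_snapshot("base.msgpack")        # hypothetical snapshot saved after training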
@@EveryPoint Please, I am having issues using custom datasets. The rendering is always poor with a custom dataset but okay when I use the synthetic dataset from the vanilla NeRF.
It worked.....Thank you soo much
Great to hear!
I am getting a memory clear error. I have an RTX 3080, and I used 170 photos (Nikon). I will try lower-resolution images tonight. I hope it works.
Most likely you used too much high-resolution imagery. NeRF is quite VRAM heavy. Try reducing the pixel count by half.
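If it helps, here is a small sketch for downscaling a folder of JPEGs before running COLMAP and training. It halves each dimension (which actually cuts the pixel count to a quarter, even more conservative than the halving suggested above), and it assumes Pillow is installed; the folder names are placeholders.

from pathlib import Path
from PIL import Image  # pip install pillow

src = Path("data/images")        # hypothetical input folder
dst = Path("data/images_small")  # downscaled copies go here
dst.mkdir(parents=True, exist_ok=True)

for p in sorted(src.glob("*.jpg")):
    img = Image.open(p)
    img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    img.save(dst / p.name, quality=95)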
@@EveryPoint Thanks for the advice. I dropped the picture count to 80 and it worked like a charm. Thank you again 🙏
Good to hear!
How do I save a 3D model or point cloud?
You can save a mesh using marching cubes. However, the quality of the mesh is lower than with traditional photogrammetry.
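For reference, a very rough sketch of that export through the Python bindings is below; the method name compute_and_save_marching_cubes_mesh and its resolution argument are recalled from scripts/run.py and may differ in your instant-ngp version, and the file names are placeholders.

import sys
sys.path.append("build")  # run from the instant-ngp root so the pyngp bindings import

import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)  # constructor recalled from run.py; verify against your version
testbed.load_snapshot("base.msgpack")        # hypothetical snapshot of a trained scene
testbed.compute_and_save_marching_cubes_mesh("mesh.obj", ngp.ivec3(256))  # 256^3 marching-cubes grid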
Do you know if it's possible to run a multi-GPU setup?
Great video btw!
Currently, no, it does not.
Curious: could you mix in a couple of higher-definition images to increase the quality? If so, would you have to apply different weights to get that better result?
Thanks for the video. Is there any way to rotate the scene? When I try to do it with my mouse, it just spins in the wrong direction. I tried to align the center but couldn't make it work.
No, you would have to modify your transforms. If the whole scene is sideways, sometimes deleting the image metadata and rerunning COLMAP will fix the issue.
@@EveryPoint thanks 🙏
@@EveryPoint what's the functional reasoning behind the lack of a rotate? It's a 3D object right? I feel like I'm missing something..
@@tasteyfoood Rotating the scene in the GUI? Also, what you are seeing in a NeRF is not a discrete object; it's a radiance field where every coordinate in the field has an "object", but it may be transparent.
@@EveryPoint thanks it’s helpful to realize it’s not producing an “object”. I think my issue may have stemmed from trying to rotate a sliced section of the radiance field and being confused that it wasn’t rotating with the sliced section as the center point
Sorry if you said it in the video, but can you download that 3D model? Like an OBJ or MTL?
A very poor quality one. This is not the NeRF you’re looking for.
@@EveryPoint Please, is there another NeRF implementation that produces good-quality 3D in real time (or close to it)?
They need to implement the ability to render your Instant NeRF in 3D rendering software. Something that's not so GPU intensive. Something that could be adapted to a mobile device.
I suggest you look into Gaussian Splatting.
Hi, is there any chance to export the data we obtained in .obj format?
I wish NeRF were available by default in After Effects, Houdini, Unity, and Unreal ... definitely a revolution for XR!
We imagine it becoming part of the NVIDIA Omniverse
I use an NVIDIA RTX 2060 Super, 32 GB of RAM, and an AMD Ryzen 7 3800X 8-core processor. Will it be able to handle it?
Yes! Your limit will be the VRAM on the 2060. Keep your input image resolution at 1920x1080.
Are these instructions still relevant? Just curious if you still need all this. I downloaded the instant NGP.
One question: is the 1080 Ti GPU still compatible with the NeRF AI technology, or do I need an RTX-series GPU?
The 1080 Ti works; however, training and rendering times will be lengthy. NVIDIA suggests a 20XX or greater.
Is this possible on a Mac M1?
No, this is only supported on Windows and Linux machines with an NVIDIA GPU.
@@EveryPoint What about on Colab? P.S. I am unable to run NeRF on my Mac M1. I have around 125 pictures of a nice art piece (4K resolution, 360-degree shots, around 400 MB of total data). I would love to complete this project, but I am afraid compatibility might be the bottleneck.
Hi there, I was trying to implement a project using this and was wondering if there is a way to crop (min x, y, z and max x, y, z) without using the GUI (using the command line, preferably).
I am using an RTX 3050 Ti.
It would be a great help if you could guide me on how to do it or where to look, since as far as I can tell you're the only one who actually helps me understand what's going on.
Thanks a lot.
Hi, how are you doing? I am having problems rendering a custom dataset. The result is always poor. Is there a way to get the image inside the box and get a good rendering?
@@jeffreyeiyike122 Try adjusting the aabb; the optimal value differs from scene to scene.
@@jweks8439 I have tried adjusting the aabb between 1 and 128, but the rendering and PSNR aren't improving.
@jeffreyeiyike2358 If you're getting bad renderings only with your custom data, the problem might be with the custom data itself. First, try rendering the sample data that ships with instant-ngp in the data folder, such as the fox and the armadillo. If those render fine, consider reading their transforms files and trying to replicate the parameters preferred for such a scene. Also check your input images, whether they are frames of a video or plain photos, and remove any blurry or shaky ones to improve the quality of the render. It is worth noting as well that if you are using images rather than a video with COLMAP, the images might have been shot with insufficient overlap, which can lead to a loss of detail. From my testing, I also found that you should avoid direct light, as reflections tend to show up in the render, so diffused light works best for retaining detail and accurate color and texture in the scene.
Hope I was of some help 😊
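Following up on the blurry-frame advice above, one quick way to score frames is OpenCV's variance-of-Laplacian; the snippet below is only a sketch, with a hypothetical folder name and an arbitrary threshold to tune for your own footage.

import cv2  # pip install opencv-python
from pathlib import Path

BLUR_THRESHOLD = 100.0  # arbitrary starting point; tune per dataset

for p in sorted(Path("data/images").glob("*.jpg")):  # hypothetical frames folder
    gray = cv2.imread(str(p), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue  # skip files OpenCV cannot read
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"possibly blurry, consider removing: {p.name} (score {sharpness:.1f})")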
@@jweks8439 I would be happy to set up a Zoom meeting with you. The fox and armadillo work fine for me. I noticed the bounding box is not on the object. I used videos and not images because, if there isn't good overlap, COLMAP fails and doesn't produce the images and the transforms.json, so I always use videos.
Thanks for the straightforward directions. I got the app installed and it worked well, but now it says "This app can't run on your PC." Any ideas? Thanks
How are you launching the app? You should be launching it via anaconda. Perhaps try running in admin mode.
@@EveryPoint Tried Anaconda and Visual Studio. Also tried running as admin, and I get the same error. I read it could be related to Windows security/antivirus protection, but no luck when I disable those.
Got it to work after a reinstall. Now I'm running into an issue when running the render.py script. I'm getting "RuntimeError: Network config "data\into_building\base.msgpack" does not exist". Any ideas?
@@anthonysamaniego4388 Did you save a snapshot after training? This is necessary to do before rendering. Saving the snapshot will generate that missing file.
@@EveryPoint That was it! Thank you!!!
Can I do the same on Colab?
We are not sure if there is a Colab version of Instant NGP yet. There is a Colab version of Nerfstudio, though, that can run instant-ngp.
@@EveryPoint thank you sir I will try it
Thanks for sharing.
You're welcome!
Can I run this in Google Colab?
Yes.
Do we need to learn coding?
They should have easy online demos for a lot of these kinds of things.
Instant-NGP is not productized yet, which is why there is no installer and no full set of tutorials.
It's possible to make your first render within 6-8 hours, even with entry-level skills.
Setting up all the stuff takes time, but it's rather rewarding.
If someone made an iOS app that allows you to upload a bunch of pictures and send it off to a remote server with a GPU, that would be a very popular app.
kiri engine app
Nice work! Although very expensive
Expensive hardware is needed for sure. However, that is the truth for photogrammetry and 3D modeling as well.
@@EveryPoint Thanks for your comments. Looking forward to seeing more amazing stuff from your channel.
Really work
Thanks for the software, mate!
You're welcome!
lmao I was thinking this same thing, then I saw your comment
damn
Come on, just make a normal, human-friendly interface so that anyone can use this program.
That would be nice. However, this is still in the research phase. Eventually we expect NVIDIA to productize it. In the meantime, check out Luma Lab's beta.
I've had success with this build and have been messing around with using it (here's a test: ua-cam.com/video/JbiCMN2lPAQ/v-deo.html). I had some issues, mostly confounding and inconsistent, but I'll mention them all here in case it helps (I'm pretty new to this stuff, so it might seem obvious to some).
I'm using Windows 10, NVIDIA GeForce RTX 2070.
I followed bycloudai's GitHub fork (github.com/bycloudai/instant-ngp-Windows) and video (ua-cam.com/video/kq9xlvz73Rg/v-deo.html). The build went smoothly the first time, but I did have some trouble finding the exact versions of some things.
I used Visual Studio 16.11.22 (not 16.11.9) and CUDA 11.7 (not 11.6). I used OpenEXR-1.3.2-cp37-cp37m-win_amd64 (not OpenEXR-1.3.2-cp39-cp39-win_amd64 - this one gave me "Does not work on this platform." I chose different versions until one worked).
I'm using Python 3.9.12 (this is what is returned when python --version is used on Anaconda Prompt, but, on Command Prompt, it says 3.9.6 (at one point it said 3.7 - confounding)).
Everything went smoothly, and I first tried my own image set of photos shot outwards around a room. When testbed.exe was launched, everything was extremely pixelated. This resolution can be changed by unchecking the Dynamic resolution box and sliding the Fixed resolution slider (the higher the number, the lower the resolution. Things might go really slow, and it will be hard to check the Dynamic resolution box again. It's easier to slide the slider to a higher number, then check that box).
My image set, though, did not produce anything recognizable as a room. Apparently this works better when looking inward at a subject. I had success with the mounted fox head example.
Using the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" creates the transforms.json file. There's some inconsistency in what bycloudai says about the aabb_scale number. He states that a lower number, like 1, would be for people with a better GPU, and 16 for a moderate GPU. But the NVIDIA folks say "For natural scenes where there is a background visible outside the unit cube, it is necessary to set the parameter aabb_scale in the transforms.json file to a power of 2 integer up to 128, at the outermost scope." For my above YouTube example, I used 128 - this looked much better than using 2. This number, though, needs to be changed in the transforms.json text file, because only a number from 1-16 is accepted in the above command.
The Camera path tab window is hidden behind the main tab window. Reposition your 3D scene using the mouse and scroll button on mouse, then hit "Add from cam" to create a camera keyframe (after creating a snapshot in the main tab). To play the keyframes, slide the auto play speed to choose the speed of playback, and click the above camera path time slider (so intuitive!). You'll see the playback in the little window. If you click READ, it will playback in the big window, but it seems to mess up the axis of rotation or something (not sure what this READ is, but I don't suggest clicking it!).
All was going well, but when I hit Esc and tried to render out the video, I had a few problems. First, I hadn't copied the render.py script from bycloudai into my scripts folder. Once that was copied, I got an error about the pyngp module not being present (this seems to be a common problem). But that folder was there. I removed the .dir from that folder, and I didn't get that pyngp error anymore. I got another error (this is where things are inconsistent and confounding again). Completely by mistake I realized I could run the render command in the Developer Command Prompt, but not the Anaconda Command Prompt. Worked perfectly. But... at one point while I had another image set training, everything froze, and I had to do a hard reboot. When I tried to run testbed.exe again, I got a "This PC cannot run this program" Windows popup. After trying several things to get this to run again, I realized the file was 0 KB. No other exe files had this problem, and I ran a virus check and everything was clean.
I started a new folder, and re-did bycloudai's compile steps. After that, everything worked perfectly, including the rendering out of the video file in the Anaconda Prompt, and keeping the .dir on the pyngp folder (go figure). Hope that helps some folks.
Oh, and check out some other AI stuff I've messed with here: ua-cam.com/video/MoOtNMgFOxk/v-deo.html
100% of this makes sense. I believe a lot of the issues you ran into were because instant-NGP has been updated a lot since bycloudai’s fork and this video. Also, you were most likely not always working in the conda environment. I have quite a few updates going live on this channel tomorrow.
@@thenerfguru Cool. Are there tricks to getting a cleaner 3D scene? I’d love to use this to do moves around vehicles like in my test, but the image is a bit fuzzy still. In examples I’ve seen in other videos things are much crisper.
Start with sharp photos and the deepest depth of field possible. Also, keep the scene as evenly lit as possible. Take loops far enough away from the vehicle that you see it all in one shot. Remember that the view in the GUI does not look nearly as sharp.
Another confounding issue - after closing the Anaconda Prompt and reopening it, when using the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" I'm now getting, out of nowhere:
File "scripts/colmap2nerf.py", line 19, in <module>
import cv2
ModuleNotFoundError: No module named 'cv2'
And weirdly, the command only works in the Developer Command Prompt.
@@EveryPoint Do you know what the various training options are, and how they affect the final outcome? For instance, what is "Random Levels"? I notice that when it is clicked, the loss graph changes drastically (the line gets much higher when clicked). Also, do you know how to read this loss graph? I know there's a point of diminishing returns - is that what this graph indicates, and is it when the line is high or low? (Much of the time I'm seeing the line spiking up and down, completely filling the vertical space.) Is there a number of steps that, on average, should be reached? I've let it run all night and gotten around a million steps, and I'm not sure the result was any better than at a much lower number (and I have a 2070 - I'm not sure if the 3090 gets to this number in a ridiculously shorter time period).