I have never seen such a simple and fancy implementation of instant-ngp before. Great work!
Thank you! :)
Exactly what I'm looking for. Thanks, friend! Make more of these
Thank you! Will do
Here before this channel blows up ;)
Really nice work 🔥
Thank you so much!
This tutorial is great. I have a question about how many epochs you trained for, because I ran for 10 epochs but I'm not getting results as good as those you posted on GitHub. Could you please tell me?
Hi @rohink-VR555, thank you for your nice comment and question. I trained for one epoch (parameters in the main function). That was several months ago, but if I recall correctly, training for longer was sometimes harmful (overfitting).
@papersin100linesofcode thanks for the response
Really great work!🎉
Would love to see implementations of RL papers and foundation models.
Thank you! This is planned! Should be released in the coming months
Hey, can you please explain how we can turn the images in 'novel_view' into a 3D object? Does it require photogrammetry?
Thank you for your question.
The learned NeRF representation is a 3D model of the object.
The most commonly used approach to obtain another representation (e.g. mesh) is to do a 3D to 3D conversion using algorithms such as Marching Cubes.
Another possible approach, more closely related to what you suggest, is to use the NeRF representation to generate more views -- and potentially depths -- so that they can be fed to an algorithm such as TSDF (truncated signed distance function) Fusion.
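For concreteness, here is a minimal sketch of the Marching Cubes route using scikit-image. An analytic sphere field stands in for the learned NeRF density (querying the trained model on a 3D grid is assumed, not shown); all names here are illustrative, not from the repository.

```python
import numpy as np
from skimage import measure

# Sample a scalar field on a regular 3D grid. With a trained NeRF you would
# instead evaluate the model's density at these grid points.
N = 64
xs = np.linspace(-1.0, 1.0, N)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # (N, N, N, 3)
density = 1.0 - np.linalg.norm(grid, axis=-1)  # positive inside a unit sphere

# Marching Cubes converts the volumetric field into a triangle mesh at the
# chosen iso-level (here the 0-crossing, i.e. the sphere's surface).
verts, faces, normals, values = measure.marching_cubes(density, level=0.0)
print(verts.shape, faces.shape)  # (V, 3) vertices, (F, 3) triangle indices
```

The resulting `verts`/`faces` arrays can then be saved as a mesh (e.g. OBJ/PLY) with any mesh library.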
Is it possible that on my T4 it takes around 30 minutes per epoch, and not seconds as written in the paper?
Anyway great work, really impressive
Thank you! Yes, definitely possible. I focused on making a simple implementation rather than focusing on speed. Beyond the accelerations from custom CUDA kernels and tiny-cuda-nn, this code could be improved by parallelising the loop that iterates over all resolutions.
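To illustrate the idea with a hypothetical sketch (the names `n_levels`, `base_res`, and `growth` are illustrative, not taken from the repository): a per-level Python loop in a multiresolution encoding can often be replaced by one broadcasted NumPy operation over all levels at once.

```python
import numpy as np

# Geometric sequence of grid resolutions, one per encoding level.
n_levels, base_res, growth = 4, 16, 2.0
resolutions = np.floor(base_res * growth ** np.arange(n_levels))  # (L,)

points = np.random.rand(1024, 3)  # query points in [0, 1]^3

# Loop version: one pass per resolution level.
loop_cells = [np.floor(points * r) for r in resolutions]

# Vectorised version: broadcast over all levels at once -> shape (L, 1024, 3).
vec_cells = np.floor(points[None, :, :] * resolutions[:, None, None])

assert all(np.array_equal(a, b) for a, b in zip(loop_cells, vec_cells))
```

The same pattern applies on the GPU (e.g. with PyTorch tensors), where removing the Python loop lets all levels be processed in a single kernel launch.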
@papersin100linesofcode Thanks a lot for the response. You are a really great and fantastic researcher. Thanks again
Thank you so much
Great tutorial. I am also wondering which theme you use in the video, btw.
Hi @anhtth2207, thank you for your question. Do you mean the Sublime Text theme? If yes, it is the default theme.
How can I generate a custom dataset?
Hi @jeffreyeiyike2358. For real or synthetic data?
great tutorial brother!
Thank you so much! :)
Thank you very much for this video.
My pleasure :)
Thank you so muchhh!!
Thank you! :)