Thank you for this insightful implementation. I ask because I am very new to this... Could you please highlight which parts of the implementation differ from the generic NeRF, and which part makes it run in real time, corresponding to the PlenOctrees?
Thank you for your comment! The difference is that the outputs of the model can be efficiently cached in a PlenOctree, which can be queried in real time and combined with the ray direction to compute the color in only a few floating-point operations. I hope that is clear and that it helps :)
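For concreteness, here is a minimal sketch in PyTorch (matching the video's framework) of how cached spherical-harmonic coefficients are combined with a view direction to get a color; the function name and tensor shapes are my own assumptions, but the degree-2 real SH basis is the one used in the PlenOctrees paper:

```python
import torch

# Hard-coded real SH constants up to degree 2 (9 basis functions),
# as in the PlenOctrees reference code.
C0 = 0.28209479177387814
C1 = 0.4886025119029199
C2 = [1.0925484305920792, -1.0925484305920792, 0.31539156525252005,
      -1.0925484305920792, 0.5462742152960396]

def eval_sh_rgb(sh_coeffs, dirs):
    """sh_coeffs: (..., 3, 9) cached coefficients (3 color channels),
    dirs: (..., 3) unit view directions -> (..., 3) raw RGB."""
    x, y, z = dirs[..., 0:1], dirs[..., 1:2], dirs[..., 2:3]
    basis = torch.cat([
        C0 * torch.ones_like(x),                # l = 0
        -C1 * y, C1 * z, -C1 * x,               # l = 1
        C2[0] * x * y, C2[1] * y * z,           # l = 2
        C2[2] * (2 * z * z - x * x - y * y),
        C2[3] * x * z, C2[4] * (x * x - y * y),
    ], dim=-1)                                   # (..., 9)
    return (sh_coeffs * basis.unsqueeze(-2)).sum(-1)
```

This is why it is fast: once the coefficients are stored in the octree leaves, rendering a new view needs no network evaluation at all, only this handful of multiply-adds per sample.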
As I understand it, you entirely skip the PlenOctree in your implementation. The only difference between the original NeRF paper and the one you are presenting here is that the MLP outputs spherical Gaussians instead of RGB values? How is this supposed to render faster, since you still have to evaluate the entire model along the ray each time you want to render a novel view? I think you promise more here than you are delivering. Anyway, your implementation of the neural network is still educational and helps to grasp the details. Thank you for this.
Hi, the code implements NeRF-SH, or more precisely NeRF-SG. For real-time rendering, those values can be cached in a PlenOctree or another voxel-based representation. You are right, that last part was not implemented in the video.
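For the spherical-Gaussian variant, each point stores a small set of lobes instead of SH coefficients; here is a sketch of the standard SG mixture evaluation (the helper name and parameterization are assumptions on my side, not the video's code):

```python
import torch

def eval_sg_rgb(lobe_axes, sharpness, amplitudes, dirs):
    """lobe_axes: (K, 3) unit axes mu_k, sharpness: (K,) lambda_k,
    amplitudes: (K, 3) RGB a_k, dirs: (..., 3) unit view directions.
    Each lobe is G_k(d) = exp(lambda_k * (mu_k . d - 1)), peaking at d = mu_k."""
    cos = dirs @ lobe_axes.T                      # (..., K)
    weights = torch.exp(sharpness * (cos - 1.0))  # (..., K)
    return weights @ amplitudes                   # (..., 3) raw RGB
```

As with SH, these parameters can be baked into a voxel grid or octree once, after which novel views no longer require evaluating the MLP along each ray.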
Thank you so much for the great video! Do you have any plans to make a "100 lines of PyTorch code" video about Instant-NGP?
Thank you for your comment! Yes, this is something I am working on. It will be one of my next videos, once the code is ready.
@papersin100linesofcode This is truly amazing! Looking forward to a great video