PlenOctrees in 100 lines of PyTorch code | NeRF#6

  • Published 18 Dec 2024

COMMENTS • 7

  • @경영학부배옥환
    @경영학부배옥환 8 months ago +1

    Thank you so much for the great video! Do you have any plans to make a 100 lines of PyTorch code video about instant-ngp?

    • @papersin100linesofcode
      @papersin100linesofcode  8 months ago

      Thank you for your comment! Yes, this is something I am working on. It will be one of my next videos once the code is ready.

    • @경영학부배옥환
      @경영학부배옥환 8 months ago

      @@papersin100linesofcode This is truly amazing! Looking forward to a great video

  • @gabby.suwichaya
    @gabby.suwichaya 1 year ago +1

    Thank you for this insightful implementation. I am very new to this... Could you please highlight which parts of the implementation differ from generic NeRF, and which part makes it run in real time and corresponds to the PlenOctrees?

    • @papersin100linesofcode
      @papersin100linesofcode  1 year ago +2

      Thank you for your comment! The difference is that the outputs of the model can be efficiently cached in a PlenOctree, which can be queried in real time and combined with the ray direction to compute the color in only a few floating-point operations (a minimal sketch of this step appears after the comments).
      I hope that is clear, and that it helps :)

  • @Mr_NeRF
    @Mr_NeRF 1 year ago +1

    As I understand it, you entirely skip the PlenOctree in your implementation. The only difference between the original NeRF paper and the one you are presenting here is that the MLP outputs spherical Gaussians instead of RGB values? How is this supposed to render faster, since you have to evaluate the entire model along the ray each time you want to render a novel view?
    I think you promise more here than you deliver. Anyway, your implementation of the neural network is still educational and helps to grasp the details. Thank you for this.

    • @papersin100linesofcode
      @papersin100linesofcode  1 year ago

      Hi, the code implements NeRF-SH, or more precisely NeRF-SG. For real-time rendering, those values can be cached in a PlenOctree or another voxel-based representation. You are right, that last part was not implemented in the video; the sketches below outline the idea.
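
To make the reply to @gabby.suwichaya concrete, here is a minimal PyTorch sketch of the step it describes: once spherical-harmonics coefficients are cached per octree leaf or voxel, turning them into a view-dependent color only takes a basis evaluation, a dot product per channel, and a sigmoid. This is an illustrative sketch, not the video's code: the degree-2 basis (9 coefficients per channel) and the names eval_sh_basis / sh_to_rgb are assumptions.

```python
import torch


def eval_sh_basis(dirs: torch.Tensor) -> torch.Tensor:
    # Real spherical-harmonics basis up to degree 2 (9 values per direction).
    # dirs: (N, 3) unit view directions -> (N, 9).
    # The sign/ordering convention only needs to match the one used in training.
    x, y, z = dirs.unbind(-1)
    return torch.stack([
        0.282095 * torch.ones_like(x),      # l=0
        0.488603 * y,                       # l=1, m=-1
        0.488603 * z,                       # l=1, m= 0
        0.488603 * x,                       # l=1, m=+1
        1.092548 * x * y,                   # l=2, m=-2
        1.092548 * y * z,                   # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),     # l=2, m= 0
        1.092548 * x * z,                   # l=2, m=+1
        0.546274 * (x * x - y * y),         # l=2, m=+2
    ], dim=-1)


def sh_to_rgb(sh_coeffs: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    # sh_coeffs: (N, 3, 9) coefficients read from the cache (3 channels x 9 basis).
    # dirs:      (N, 3) unit view directions.
    # Color = sigmoid of a per-channel dot product -- no network evaluation at all.
    basis = eval_sh_basis(dirs)                    # (N, 9)
    rgb = (sh_coeffs * basis[:, None, :]).sum(-1)  # (N, 3)
    return torch.sigmoid(rgb)


if __name__ == "__main__":
    dirs = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
    coeffs = torch.randn(4, 3, 9)          # would normally come from the octree cache
    print(sh_to_rgb(coeffs, dirs).shape)   # torch.Size([4, 3])
```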
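And here is an equally hedged sketch of what "NeRF-SH" means in the last reply: the MLP takes only a position and outputs a density plus SH coefficients, with no view-direction input, so its outputs can be evaluated once and baked into a grid. The dense grid below is a simplistic stand-in for a real sparse PlenOctree, and the names NeRFSH / bake_dense_grid, the layer sizes, and the grid bounds are made up for this sketch.

```python
import torch
import torch.nn as nn


class NeRFSH(nn.Module):
    # Minimal NeRF-SH-style field: position -> (density, SH color coefficients).
    # The view direction is NOT an input; view dependence lives entirely in the
    # 3 x 9 SH coefficients, which is what makes the outputs cacheable.
    def __init__(self, hidden: int = 128, sh_dim: int = 9):
        super().__init__()
        self.sh_dim = sh_dim
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 * sh_dim),  # density + SH coefficients
        )

    def forward(self, xyz: torch.Tensor):
        out = self.mlp(xyz)                               # (N, 1 + 3*sh_dim)
        sigma = torch.relu(out[..., 0])                   # (N,) non-negative density
        sh = out[..., 1:].reshape(*xyz.shape[:-1], 3, self.sh_dim)
        return sigma, sh


@torch.no_grad()
def bake_dense_grid(model: NeRFSH, resolution: int = 64, bound: float = 1.5):
    # Evaluate the trained field once on a dense grid (a crude stand-in for a
    # sparse PlenOctree). After baking, novel views only need grid lookups plus
    # the SH dot product above -- the MLP is never queried again at render time.
    coords = torch.linspace(-bound, bound, resolution)
    xyz = torch.stack(torch.meshgrid(coords, coords, coords, indexing="ij"), dim=-1)
    sigma, sh = model(xyz.reshape(-1, 3))
    return (sigma.reshape(resolution, resolution, resolution),
            sh.reshape(resolution, resolution, resolution, 3, model.sh_dim))


if __name__ == "__main__":
    model = NeRFSH()
    sigma_grid, sh_grid = bake_dense_grid(model, resolution=32)
    print(sigma_grid.shape, sh_grid.shape)  # (32, 32, 32) and (32, 32, 32, 3, 9)
```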