Thank you for your video! I wonder where we could access the notebook.
Thanks for the video! I am not an expert in this, but the basic intuition is that point clouds are sets, so there is no natural way of ordering them, i.e. you can NOT consistently identify which is point 0, 1, 2, ..., n between samples. Grid convolutions, on the other hand, assume a very precise local ordering, so I believe the regularization applied in PointNet is trying to somehow learn an ordering in the first half of the network and then use it in the second half.
I believe GNNs and Transformers are much better at this task than CNNs since they naturally operate on sets. Things like the SE(3) Transformer even (try to) encode 3D rotational symmetries into the architecture.
A good data augmentation for this is 3D rotations, so the network can try to learn to be invariant to them, just like CNNs with rotated images.
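Both ideas in this comment can be sketched in a few lines of NumPy: a symmetric pooling function (like PointNet's global max pool) gives the same output no matter how the points are ordered, and a random rotation about the z-axis (the common choice for upright objects) is a simple augmentation. The function name below is illustrative, not from PointNet's actual code.

```python
import numpy as np

def random_z_rotation(points, rng=None):
    """Rotate an (N, 3) point cloud by a random angle about the z-axis."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

points = np.random.default_rng(0).normal(size=(1024, 3))

# Permutation invariance: a symmetric max pool over the set of points
# is unchanged when the points are shuffled.
shuffled = points[np.random.default_rng(1).permutation(len(points))]
assert np.allclose(points.max(axis=0), shuffled.max(axis=0))

# Rotation is rigid: distances from the origin are preserved.
rotated = random_z_rotation(points)
assert np.allclose(np.linalg.norm(points, axis=1),
                   np.linalg.norm(rotated, axis=1))
```

In a training pipeline you would apply `random_z_rotation` to each cloud on the fly, so the network sees a fresh orientation every epoch.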
Hey Cristian, thanks for this information! I agree, it definitely seems like the Transformer would be better suited for this problem! I'll check out the SE(3) Transformer, still very new to point cloud research haha! Interesting to see data augmentation in the geometric DL space; 3D rotations, as in neural radiance fields, seem like they could be interesting for 2D image data as well!
@connor-shorten check out the "Point Transformer" published in 2020
thanks for the knowledge
Hi, could you make a video on PointNet++ or newer networks (SalsaNext, SPVNAS) for 3D semantic segmentation with lidar point clouds? I have my own point cloud data captured with a Velodyne lidar, and I want to know how to use one of these nets to predict semantic segmentation on my own data, thank you!
How can I save the model to a .h5 file?
Why is the validation loss so high?
😀
Thanks, hope you find this useful!
Why only 2048 points?
But how can you explain and review work you did not understand? Did you check the validation accuracy? Did you plot the loss? Just take a moment to plot the training and validation loss, at least! Mate, this Keras work is basically a fake replication of the original paper. The model is overfitting from start to finish, and the results are just random.
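For anyone who wants the diagnostic this comment is asking for: here is a minimal sketch of plotting training vs. validation loss. The `history` dict below is hypothetical dummy data shaped like what Keras' `model.fit()` returns in `history.history`; with a real run you would pass that in instead.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical values standing in for history.history from a real
# model.fit() call; a widening gap like this one suggests overfitting.
history = {
    "loss":     [2.1, 1.4, 0.9, 0.6, 0.4],
    "val_loss": [2.0, 1.6, 1.5, 1.6, 1.7],
}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="training loss")
plt.plot(epochs, history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curves.png")
```

If the validation curve flattens or rises while the training curve keeps dropping, the model is memorizing the training set rather than generalizing.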
Why is the validation loss so high and random?