It earns a lot of trust that you corrected the oops from your last video. Kudos.
FWIW, when applying cosine similarity to BOW vectors (counts, i.e. non-negative numbers), the value will be between 0 and 1.
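A quick worked check (a minimal NumPy sketch with hypothetical count vectors):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity: dot product of the L2-normalized vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical bag-of-words count vectors over a shared 4-word vocabulary.
doc1 = np.array([2.0, 0.0, 1.0, 3.0])
doc2 = np.array([0.0, 1.0, 1.0, 2.0])

# With only non-negative entries the dot product cannot be negative,
# so the similarity is bounded to [0, 1] instead of [-1, 1].
print(cosine_sim(doc1, doc2))  # ~0.76
```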
Thanks!
Great work! Combined with part 1, this gives a very complete picture of contrastive learning. Thanks, it helps a lot.
Great tutorial! In my opinion, to enhance the embedding evaluation part, it's crucial to establish a solid baseline. One effective approach would be to apply t-SNE directly to the (high-dimensional) point cloud data (e.g., after a simple permutation-invariant operation). By comparing these t-SNE plots with the ones generated from the learned embeddings, we can effectively gauge the impact of the contrastive learning framework on class separation. This would provide a fair assessment of how much the framework has improved the data representation.
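A minimal sketch of that baseline (hypothetical data shapes, scikit-learn's TSNE; the pooling choice is just one simple permutation-invariant option):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical data: 100 point clouds with 1024 points in 3D each.
point_clouds = np.random.rand(100, 1024, 3)

# Simple permutation-invariant reduction: per-axis max and mean pooling
# over the point dimension, so reordering the points changes nothing.
baseline = np.concatenate(
    [point_clouds.max(axis=1), point_clouds.mean(axis=1)], axis=1
)  # (100, 6)

# t-SNE on the raw baseline features; plot this next to the t-SNE of
# the learned embeddings to see what contrastive training actually adds.
baseline_2d = TSNE(n_components=2, perplexity=30).fit_transform(baseline)
```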
Great followup video. I really like the way you speak simply about whatever concept you are explaining. I think it has good pedagogical value. Keep it up 👍
Thank you for the kind words! :)
Very cool videos, thank you man!
Your channel is truly underrated, keep it going!
You're awesome! Thanks for making these videos and sharing your knowledge. I hope you keep creating this kind of content, I'll keep watching!!!
Thank you :) I'll keep them coming!
Very useful and informative video, especially the PointNet and batch size parts. Special thanks for using the point cloud domain!
Glad it was useful! :)
Very excellent tutorials! Thank you very much for your generous help to beginners like me.
Glad that you liked it!
Thank you so much. Please keep posting!
Keep up the good work. You are amazing. You have a talent for teaching! It would be fantastic if you could implement something like DINO!
The quality of your videos and explanations is amazing! Actually, I am really thinking about requesting your expertise. Is any work going to be done on GNN contrastive learning? I am very interested in the idea of extracting invariant features from GNN embeddings. Keep up the good work! ^_^
Good video, but one thing: I do not think a voxel grid is a sparse representation. Sparsity happens when points have distance between them, like in a low-resolution point cloud. Meshes and voxels do not have this effect!
What's the similarity between NTXentLoss (or InfoNCE) and SimCLR loss?
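For context: NT-Xent (normalized temperature-scaled cross entropy) is the loss SimCLR uses, and it is an instance of InfoNCE, so the three largely coincide. A minimal PyTorch sketch, assuming two row-aligned batches of embeddings z1, z2 from two augmented views:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (B, D) embeddings of two augmented views; row i of z1
    # and row i of z2 form the positive pair.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D)
    sim = z @ z.t() / temperature                       # cosine sims
    sim.fill_diagonal_(float("-inf"))                   # drop self-pairs
    # The positive for row i is row i + B (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage sketch: loss = nt_xent(encoder(view1), encoder(view2))
```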
Great explanation! In the case of semantic segmentation, how can we calculate the contrastive loss for pixel embeddings?
Have you seen this explanation? towardsdatascience.com/pixel-level-dense-contrastive-learning-6558691eeb67
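One possible scheme, sketched under assumptions (per-pixel embeddings and ground-truth labels available, pixels of the same class treated as positives; sampling and masking details vary across papers):

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feats, labels, temperature=0.1, n_samples=256):
    # feats: (B, C, H, W) per-pixel embeddings; labels: (B, H, W) class ids.
    # Pixels that share a class are positives, everything else is negative.
    B, C, H, W = feats.shape
    f = feats.permute(0, 2, 3, 1).reshape(-1, C)
    y = labels.reshape(-1)
    keep = torch.randperm(f.size(0))[:n_samples]   # subsample for memory
    f, y = F.normalize(f[keep], dim=1), y[keep]

    logits = f @ f.t() / temperature               # (n, n) similarities
    pos = (y[:, None] == y[None, :]).float()
    pos.fill_diagonal_(0)                          # exclude self-pairs

    # Log-softmax over all non-self pairs, averaged over the positives.
    mask = torch.eye(logits.size(0), dtype=torch.bool)
    logits = logits.masked_fill(mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(mask, 0.0)     # avoid 0 * (-inf) = NaN
    n_pos = pos.sum(dim=1)
    loss = -(pos * log_prob).sum(dim=1) / n_pos.clamp(min=1)
    return loss[n_pos > 0].mean()
```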
Love the content ....
Useful video. Question: why are no augmentations applied at test time?
Hi, augmentations are only used to introduce more variance during training (basically to generate more samples and make the model invariant to different transformations). At test time you simply want to evaluate the real data.
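To make that concrete, a minimal point-cloud sketch (hypothetical transforms; the exact augmentation recipe varies): augmentation lives only in the training pipeline, while the test pipeline leaves the data untouched.

```python
import math, random
import torch

def train_transform(points):
    # points: (N, 3). Augment only during training: random rotation
    # around the z-axis plus small Gaussian jitter (a common recipe).
    theta = random.uniform(0.0, 2.0 * math.pi)
    rot = torch.tensor([[math.cos(theta), -math.sin(theta), 0.0],
                        [math.sin(theta),  math.cos(theta), 0.0],
                        [0.0,              0.0,             1.0]])
    return points @ rot.t() + 0.01 * torch.randn_like(points)

def test_transform(points):
    # Test time: no augmentation, just the real data as-is.
    return points
```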
@DeepFindr Thank you for the prompt answer.
In the "install torch" section, what is the link?
Thanks for the video. Will the next one be a Part 3, with an Infomax (contrastive GNN) tutorial? ;)
I'm actually thinking about it :D but I wonder what additional value it adds. The only differences are the augmentations and the data modality. Is there anything in particular you are interested in?
1) In short: There are many things
2) Sorry for my short answer. ;) A bit more seriously, I'm still trying to improve my skills so I can go from reading papers to reproducing results. Video tutorials and blog posts are very helpful to me here. In particular, your series on GNNs (a new subject for me) was very helpful (especially your "Understanding Graph Attention Networks" video, which helped me a lot with the math part of GNNs). I want to work on self-supervised GNNs (starting with Infomax), but for that, I need a better understanding of the math. It would be very helpful to have "explained working code" (as you do) that I can compare with my understanding of the paper. Sorry for the rambling, but I hope it increased the chances of you making part 3, an Infomax tutorial. ;)
Can it be implemented on text data as well? If so, can you make a video for it?
By the way, the explanation was very good, thank you.
Awesome.
Good explanation, but a bad example; the point cloud is difficult to follow. I wish you had picked 2D images instead.