Contrastive Learning in PyTorch - Part 2: CL on Point Clouds
- Published May 20, 2024
▬▬ Papers/Sources ▬▬▬▬▬▬▬
- Colab Notebook: colab.research.google.com/dri...
- SimCLRv2: arxiv.org/pdf/2006.10029.pdf
- PointNet: arxiv.org/pdf/1612.00593.pdf
- PointNet++: arxiv.org/pdf/1706.02413.pdf
- EdgeConv: arxiv.org/pdf/1801.07829.pdf
- Contrastive Learning Survey: arxiv.org/ftp/arxiv/papers/20...
▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
All Icons are from flaticon: www.flaticon.com/authors/freepik
▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from Uppbeat (free for Creators!):
uppbeat.io/t/t-check/lemon-limes
License code: KJ7PFP0HB9BWHJOF
▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:22 Errors from last video
01:41 Notebook Setup [CODE]
02:42 Dataset Intro [CODE]
05:07 Augmentations and Bias
06:26 Augmentations [CODE]
09:12 Machine Learning on Point Clouds
11:48 PointNet
13:30 PointNet++
14:32 EdgeConv
15:53 Other Methods
16:09 Model Architecture
17:25 Model Implementation [CODE]
20:11 Training [CODE]
21:05 Batch sizes in CL
22:00 Training cont [CODE]
22:40 Batching in CL
23:15 Training cont [CODE]
24:08 Embedding evaluation
27:00 Outro
▬▬ Support me if you like 🌟
►Website: deepfindr.github.io/
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
►Coursera: imp.i384100.net/b31QyP
►Link to this channel: bit.ly/3zEqL1W
►E-Mail: deepfindr@gmail.com
▬▬ My equipment 💻
- Microphone: amzn.to/3DVqB8H
- Microphone mount: amzn.to/3BWUcOJ
- Monitors: amzn.to/3G2Jjgr
- Monitor mount: amzn.to/3AWGIAY
- Height-adjustable table: amzn.to/3aUysXC
- Ergonomic chair: amzn.to/3phQg7r
- PC case: amzn.to/3jdlI2Y
- GPU: amzn.to/3AWyzwy
- Keyboard: amzn.to/2XskWHP
- Bluelight filter glasses: amzn.to/3pj0fK2
It builds a lot of trust that you corrected the mistakes from your last video. Kudos.
FWIW, when applying cosine similarity to BOW vectors (counts, i.e. non-negative numbers), the value is between 0 and 1.
Thanks!
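The cosine-similarity remark above is easy to verify with a short sketch (plain Python; the vectors and function name are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # dot product of two bag-of-words count vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# counts are non-negative, so the dot product is >= 0 and the
# similarity lands in [0, 1] instead of the usual [-1, 1]
bow_a = [3, 0, 1, 2]
bow_b = [0, 2, 1, 0]
sim = cosine_similarity(bow_a, bow_b)
print(0.0 <= sim <= 1.0)  # True
```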
Very cool videos, thank you man!
Your channel is truly underrated, keep it going!
Great work! Combined with part 1, this paints a very complete picture of contrastive learning. Thanks. It helps a lot.
Great tutorial! In my opinion, to enhance the embedding evaluation part, it's crucial to establish a solid baseline. One effective approach would involve applying TSNE directly to the (high-dimensional) point cloud data (e.g., after applying a simple permutation-invariant operation). By comparing these TSNE plots with the ones generated from the learned embeddings, we can effectively gauge the impact of the contrastive deep learning framework on the separation performance. This would provide a fair assessment of how well the framework has improved the data representation.
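The suggested baseline could be sketched roughly like this (random stand-in data and scikit-learn's TSNE; the shapes and pooling choices are illustrative assumptions, not taken from the video):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# hypothetical stand-in data: 60 point clouds, 128 points each, xyz coords
clouds = rng.normal(size=(60, 128, 3))

# simple permutation-invariant summaries over the point axis
max_pooled = clouds.max(axis=1)       # (60, 3)
mean_pooled = clouds.mean(axis=1)     # (60, 3)
baseline = np.concatenate([max_pooled, mean_pooled], axis=1)  # (60, 6)

# TSNE on this raw summary gives the baseline plot to compare against
# TSNE on the learned contrastive embeddings
coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(baseline)
print(coords.shape)  # (60, 2)
```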
Very useful and informative video, especially PointNet and batch size parts. Special thanks for using point cloud domain!
Glad it was useful! :)
You're awesome! Thanks for making these videos and sharing your knowledge. I hope you keep creating this kind of content, I'll keep watching!!!
Thank you :) I'll keep them coming!
Thank you so much. Please keep posting!
Great followup video. I really like the way you speak simply about whatever concept you are explaining. I think it has good pedagogical value. Keep it up 👍
Thank you for the kind words! :)
Keep up the good work. You are amazing. You have the talent for teaching! It would be fantastic if you can implement something like DINO!
Very excellent tutorials! Thank you very much for your generous help to beginners like me.
Glad that you liked it!
Love the content ....
The quality of your videos and explanations is amazing! Actually, I am really thinking about requesting your expertise: is any work going to be done on GNN contrastive learning? I am very interested in the idea of extracting invariant features from GNN embeddings. Keep up the good work! ^_^
awesome.
Good video, but one thing: I don't think voxels are a sparse representation. Sparsity happens when points have distance between them, like in a low-resolution point cloud. Meshes and voxels don't have this effect!
Great explanation! In the case of semantic segmentation how can we calculate the contrastive loss for pixel embeddings?
Have you seen this explanation? towardsdatascience.com/pixel-level-dense-contrastive-learning-6558691eeb67
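For reference, the pixel-level idea boils down to an InfoNCE loss where matching pixels across two augmented views are positives and all other pixels are negatives. A minimal NumPy sketch (the function name and shapes are assumptions, not the linked article's API):

```python
import numpy as np

def pixel_info_nce(emb_a, emb_b, temperature=0.1):
    """InfoNCE over per-pixel embeddings of two augmented views.

    emb_a, emb_b: (num_pixels, dim); pixel i in view A and pixel i in
    view B form a positive pair, all other pixels act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                       # (P, P) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal
    return -np.mean(np.diag(log_prob))

emb = np.eye(8)                          # 8 "pixels" with distinct embeddings
matched = pixel_info_nce(emb, emb)       # correctly aligned views -> low loss
shuffled = pixel_info_nce(emb, np.roll(emb, 1, axis=0))  # misaligned -> high loss
print(matched < shuffled)  # True
```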
Can it be implemented on text data as well? If so, can you make a video for it?
By the way, the explanation was very good, thank you.
Thanks for the video. Will the next be a Part 3? With a Infomax (Contrastive GNN) tutorial ? ;)
I'm actually thinking about it :D but I wonder what additional value it adds. The only differences are the augmentations and the data modality. Is there anything in particular you are interested in?
1) In short: There are many things
2) Sorry for my short answer. ;) A bit more seriously: I'm still trying to improve my skills in going from reading papers to reproducing results, and video tutorials and blog posts are very helpful to me for that. In particular, your series on GNNs (a new subject for me) was very helpful (especially your "Understanding Graph Attention Networks" video, which helped me a lot with the math part of GNNs). I want to work on self-supervised GNNs (starting with Infomax), but for that I need a better understanding of the math. It would be very helpful to have "explained" "working code" (as you do) which I can compare with my understanding of the paper. Sorry for my rambling, but I hope it increases the chances of you making part 3: an Infomax tutorial. ;)
In the "install torch" section, what is the link?
Useful video. Question: why are no augmentations applied at test time?
Hi, augmentations are only used to introduce more variance during training (basically to generate more samples so the model becomes invariant to these transformations). At test time you simply want to evaluate your real data.
@DeepFindr Thank you for the prompt answer
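The train/test split of augmentations can be illustrated with a small sketch (hypothetical NumPy augmentations, roughly mirroring common point-cloud contrastive setups; the exact transforms in the video's notebook may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(128, 3))   # hypothetical cloud of 128 xyz points

def augment(cloud):
    """Training-time view generation (illustrative augmentations only)."""
    # random rotation around the z-axis
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    cloud = cloud @ rot.T
    # small per-point Gaussian jitter
    return cloud + rng.normal(scale=0.01, size=cloud.shape)

train_view = augment(points)  # augmented views feed the contrastive loss
test_view = points            # at test time the raw cloud is embedded directly
```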
Good explanation, but a bad example; the point cloud is difficult to follow. I wish you had picked 2D images instead.