Contrastive Learning in PyTorch - Part 1: Introduction
- Published May 2, 2024
- ▬▬ Notes ▬▬▬▬▬▬▬▬▬▬▬
Two small things I realized when editing this video:
- SimCLR uses two separate augmented views as positive samples
- Many frameworks add a separate projection head on top of the learned representations,
which transforms them further for the contrastive loss
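The two notes above can be sketched in PyTorch. This is a minimal, hypothetical setup: the encoder, the dimensions, and the noise-based "augmentations" are placeholders for illustration, not SimCLR's actual pipeline.

```python
import torch
import torch.nn as nn

# Projection head: maps encoder features into the (lower-dimensional)
# space where the contrastive loss is computed. Dimensions are illustrative.
class ProjectionHead(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h):
        return self.net(h)

# Placeholder encoder (a real setup would use e.g. a ResNet backbone)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
head = ProjectionHead()

x = torch.randn(4, 3, 32, 32)  # a batch of 4 images
# Two separate augmented views of the same batch (noise as a stand-in
# for real augmentations like cropping and color jitter)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)

# Projections z1, z2 are what the contrastive loss would compare
z1, z2 = head(encoder(view1)), head(encoder(view2))
print(z1.shape)  # torch.Size([4, 128])
```

After training, the projection head is typically discarded and the encoder's representations are used downstream.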
▬▬ Papers/Sources ▬▬▬▬▬▬▬
- Intro: sthalles.github.io/a-few-word...
- Survey: arxiv.org/ftp/arxiv/papers/20...
- Supervised Contrastive Learning: arxiv.org/abs/2004.11362
- Contrastive Loss: / losses-explained-contr...
- Triplet Loss: towardsdatascience.com/triple...
- NT-Xent Loss: medium.datadriveninvestor.com...
- SimCLR
▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
All Icons are from flaticon: www.flaticon.com/authors/freepik
▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from Uppbeat (free for Creators!):
uppbeat.io/t/t-check/lemon-limes
License code: KJ7PFP0HB9BWHJOF
▬▬ Used Images ▬▬▬▬▬▬▬▬▬▬▬
All Images are from pixabay.com (Cats, Dogs, ...)
and royalty-free.
▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:22 Overview
01:35 Supervised vs. Self-Supervised CL
03:25 Applications
04:24 Popular Papers
06:17 Metric Learning
07:21 Loss 1
09:39 Loss 2
10:54 Loss 3
13:22 Variations between Losses
13:42 Part 2 Outlook
▬▬ Support me if you like 🌟
►Website: deepfindr.github.io/
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
►Coursera: imp.i384100.net/b31QyP
►Link to this channel: bit.ly/3zEqL1W
►E-Mail: deepfindr@gmail.com
▬▬ My equipment 💻
- Microphone: amzn.to/3DVqB8H
- Microphone mount: amzn.to/3BWUcOJ
- Monitors: amzn.to/3G2Jjgr
- Monitor mount: amzn.to/3AWGIAY
- Height-adjustable table: amzn.to/3aUysXC
- Ergonomic chair: amzn.to/3phQg7r
- PC case: amzn.to/3jdlI2Y
- GPU: amzn.to/3AWyzwy
- Keyboard: amzn.to/2XskWHP
- Bluelight filter glasses: amzn.to/3pj0fK2
Cosine similarity is between 1 and -1 and not just between 0 and 1.
Oh yes, stupid mistake. Cosine is obviously also between -1 and 1.
Thanks for pointing this out!
This does not affect the general concept of the loss, however, because the exp maps all negative terms into (0, 1).
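A quick numeric check of the point above, using plain `math.exp` on raw cosine similarity values:

```python
import math

# Cosine similarity lies in [-1, 1]; exponentiating maps any negative
# similarity into (0, 1), so the softmax-style ratio in the loss stays valid.
sims = [-1.0, -0.5, 0.0, 0.5, 1.0]
exps = [math.exp(s) for s in sims]
for s, e in zip(sims, exps):
    print(f"cos={s:+.1f} -> exp={e:.3f}")
# exp(-1) ≈ 0.368 and exp(-0.5) ≈ 0.607: both strictly between 0 and 1
```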
I recently discovered self-supervised learning.
And I'm starting to work on it.
Your video helped me a lot.
Thank you for the great explanation.
Man, you have delivered the lecture extremely well.
Looking forward to the implementation.
Loved this. Keep up the great work.
Thanks a lot
Thank you!
Great video on a very interesting subject. I've read the Supervised Contrastive Learning paper recently since I'm trying to use it in a problem I'm working on. Excited to watch the next video!
P.S. It would be cool if you could do a video (or series) on N-shot learning (few-, one- and zero-shot).
Thank you :)
Thanks for the recommendation, I put it on the list!
another amazing video
Thank you for this video. I learned a lot.
Thank you!
Great video. Thanks. Could you please comment on some of the handlings of False Negatives?
Such a good video
This video is the best video I've ever seen
Thanks!!
Hi. Thank you for the video. I found a defect in the NT-Xent loss formula: the temperature should be inside the exponent. Also, when you plug numbers into the formula, you should add the numerator term to the denominator. Have a nice day!
Yeah, thanks for pointing out! I messed some things up regarding NT-Xent :D will do some corrections in the next part :)
amazing content
Thank you!
Awesome explanation, thanks! Just a small query: how long will this series be, and what's the expected release frequency?
Thanks! I plan to upload the hands-on part within the next 2 weeks at the latest. That will be the final part of this introduction :)
Damn. You’re a good teacher
Thanks, appreciated!!
Great video!! Could you make also a video that will show us an implementation on how to do contrastive learning for semantic segmentation problem?
Thanks! Soon I'll upload the implementation for point clouds. It should be quite similar, just using other layer types.
Or do you refer to any special variants of CL for semantic segmentation?
Thanks for your reply! Actually, I would like to see an experimental example on an image segmentation dataset where a contrastive loss (for example, InfoNCE) in combination with a supervised loss such as cross-entropy boosts segmentation performance.
I have to see if I find time, but it's certainly noted. Thanks for the suggestion!
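The combined objective requested above could, roughly, look like the sketch below. This is purely illustrative: the weighting factor `lam` and the stand-in contrastive value are assumptions, not anything shown in the video.

```python
import torch
import torch.nn as nn

# Supervised per-pixel loss for segmentation
ce = nn.CrossEntropyLoss()

logits = torch.randn(2, 3, 8, 8)         # (batch, classes, H, W)
target = torch.randint(0, 3, (2, 8, 8))  # per-pixel class labels

# Stand-in scalar for an auxiliary contrastive term (e.g. InfoNCE
# over pixel or region embeddings); a real value would come from a
# contrastive loss computed on the model's features.
contrastive_term = torch.tensor(0.7)
lam = 0.1  # assumed weighting hyperparameter

total = ce(logits, target) + lam * contrastive_term
print(float(total))
```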
great
Thanks for your videos. Can you create a tutorial video on Deep Graph Infomax (maybe on the Cora dataset)? This will (besides be useful for me ;) ) tie up with your last subject on GNN with contrastive learning.
Yep, I've read the paper. Will note it down :) but the list is getting very loooong :D
The temperature term seems to be misplaced
Yes, pls see correction at the beginning of the second part :)
But the loss at 12:50 has to be 0.8/(0.8+0.2). Since the denominator has a sum, I don't know why you did not add the 0.8 to the denominator.
Yeah as mentioned in the second part I had some errors there :\
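Putting both corrections from this thread together, a hedged sketch of the per-pair NT-Xent term (the function name and toy vectors are made up for illustration):

```python
import torch
import torch.nn.functional as F

def nt_xent_pair(z_i, z_j, negatives, temperature=0.5):
    # Normalize so dot products are cosine similarities in [-1, 1]
    z_i, z_j = F.normalize(z_i, dim=0), F.normalize(z_j, dim=0)
    negatives = F.normalize(negatives, dim=1)
    # Correction 1: the temperature divides the similarity *inside* the exp
    pos = torch.exp(torch.dot(z_i, z_j) / temperature)
    neg = torch.exp(negatives @ z_i / temperature).sum()
    # Correction 2: the denominator sum includes the positive (numerator) term
    return -torch.log(pos / (pos + neg))

z_i = torch.tensor([1.0, 0.0])
z_j = torch.tensor([0.9, 0.1])
negs = torch.tensor([[0.0, 1.0], [-1.0, 0.2]])
loss = nt_xent_pair(z_i, z_j, negs)
print(float(loss))
```

With exponentiated similarities of 0.8 (positive) and 0.2 (negatives), this would reduce to -log(0.8 / (0.8 + 0.2)), consistent with the ratio in the comment above.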
@DeepFindr What if we used disentangled variational autoencoders to rotate 2D images in 3D, instead of just changing the color or rotation? The model would be able to generalize far better to far more different 3D angles with less data.
Anyone from IISc B?