Cosine similarity is between -1 and 1, not just between 0 and 1.
Oh yes, stupid mistake. Cosine is obviously also between -1 and 1.
Thanks for pointing this out!
This will, however, not affect the general concept of the loss, because exp maps all negative similarity terms into (0, 1).
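For anyone following along, a minimal sketch (assuming PyTorch; the vectors are purely illustrative) of both points: cosine similarity reaches -1 for opposite vectors, and exp maps any negative similarity into (0, 1):

```python
import torch
import torch.nn.functional as F

a = torch.tensor([1.0, 0.0])
b = torch.tensor([-1.0, 0.0])

sim = F.cosine_similarity(a, b, dim=0)  # opposite vectors -> -1, the minimum
print(sim.item())             # -1.0
print(torch.exp(sim).item())  # exp(-1) ~ 0.37, strictly inside (0, 1)
```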
Man, you have delivered the lecture extremely well.
I recently discovered self-supervised learning.
And I'm starting to work on it.
Your video helped me a lot.
Thank you for the great explanation.
Fantastic explanation. You made it look easy!
This video is the best video I've ever seen
Thanks!!
Loved this. Keep up the great work.
Thanks a lot!
Thank you!
Looking forward to the implementation.
Great video on a very interesting subject. I've read the Supervised Contrastive Learning paper recently since I'm trying to use it in a problem I'm working on. Excited to watch the next video!
P.S. It would be cool if you could do a video (or series) on N-shot learning (few-, one- and zero-shot).
Thank you :)
Thanks for the recommendation, I put it on the list!
Thank you for this video. I learned a lot.
Damn. You’re a good teacher
Thanks, appreciated!!
another amazing video
Hi. Thank you for the video. I found a defect in the NT-Xent loss formula: the temperature should be inside the exponent. Also, when you plug numbers into the formula, the denominator's sum should include the numerator's term. Have a nice day!
Yeah, thanks for pointing that out! I messed some things up regarding NT-Xent :D I'll add some corrections in the next part :)
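For reference, the corrected NT-Xent loss as written in the SimCLR paper, with the temperature $\tau$ inside the exponent and the positive pair's term included in the denominator's sum:

$$\ell_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}$$

Here $\mathrm{sim}$ is the cosine similarity and the sum runs over all $2N$ augmented samples except the anchor itself, so it does include the positive pair, which is exactly the point raised above.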
Great video!! Could you also make a video showing an implementation of contrastive learning for a semantic segmentation problem?
Thanks! Soon I'll upload the implementation for point clouds. It should be quite similar, just with different layer types.
Or are you referring to any special variants of CL for semantic segmentation?
Thanks for your reply! Actually, I would like to see an experimental example on an image segmentation dataset where a contrastive loss (for example, InfoNCE) combined with a supervised loss such as cross-entropy boosts the segmentation performance.
I have to see if I find time, but it's certainly noted. Thanks for the suggestion!
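In the meantime, a minimal sketch (assuming PyTorch; every name here is illustrative, not from the video) of what combining a supervised cross-entropy with an NT-Xent-style contrastive term could look like:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, z1, z2, lam=0.1, tau=0.5):
    """Cross-entropy on the segmentation output plus an NT-Xent-style
    contrastive term on projected embeddings of two augmented views."""
    # Supervised part: standard pixel-wise cross-entropy.
    ce = F.cross_entropy(logits, labels)

    # Contrastive part: NT-Xent over paired embeddings z1, z2 of shape (N, D).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / tau                               # cosine sims / temperature
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n),        # positive of i is i+N
                         torch.arange(0, n)])           # positive of i+N is i
    nt_xent = F.cross_entropy(sim, targets)

    return ce + lam * nt_xent
```

Whether this actually boosts segmentation performance is exactly the experiment you're asking for; the weighting `lam` in particular would need tuning.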
Great video. Thanks. Could you please comment on some of the ways false negatives are handled?
Awesome explanation, thanks! Just a small query: how long will this series be, and what is the expected release frequency?
Thanks! I plan to upload the hands-on part within 2 weeks at the latest. That will be the final part of this introduction :)
amazing content
Thank you!
Such a good video.
Thank you!
Thanks for your videos. Can you create a tutorial video on Deep Graph Infomax (maybe on the Cora dataset)? This would (besides being useful for me ;) ) tie in with your last subject, GNNs with contrastive learning.
Yep, I've read the paper. Will note it down :) but the list is getting very loooong :D
But the loss at 12:50 has to be 0.8/(0.8+0.2). Since the denominator has a sigma (a sum), I don't know why you didn't add the 0.8 to the denominator.
Yeah, as mentioned in the second part, I had some errors there :\
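For completeness: assuming the similarities on that slide were 0.8 for the positive pair and 0.2 for the single negative, with $\tau = 1$, the corrected NT-Xent term would be

$$\ell = -\log\frac{e^{0.8}}{e^{0.8} + e^{0.2}} \approx -\log(0.646) \approx 0.44$$

rather than a plain ratio of the raw similarities.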
@DeepFindr What if we used disentangled variational autoencoders to rotate 2D images in 3D, not just changing the color or in-plane rotation? The model would be able to generalize far better to many more 3D angles with less data.
The temperature term seems to be misplaced
Yes, pls see correction at the beginning of the second part :)
great
Anyone from IISc B?