Contrastive Learning in PyTorch - Part 1: Introduction

  • Published 18 Dec 2024

COMMENTS • 39

  • @philipmay9614
    @philipmay9614 2 years ago +15

    Cosine similarity is between -1 and 1, not just between 0 and 1.

    • @DeepFindr
      @DeepFindr  2 years ago +6

      Oh yes, stupid mistake. Cosine is obviously also between -1 and 1.
      Thanks for pointing this out!

    • @DeepFindr
      @DeepFindr  2 years ago +16

      This will not affect the general concept of this loss, however, because the exp maps all negative terms into (0, 1).
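
      The exchange above can be checked with a minimal sketch (hypothetical vectors, plain Python): cosine similarity lies in [-1, 1], and applying exp maps any negative similarity into (0, 1), so no term in the loss goes negative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: always in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Opposite vectors give -1, identical vectors give +1.
sim_opposite = cosine_similarity([1.0, 0.0], [-1.0, 0.0])   # -1.0
sim_identical = cosine_similarity([1.0, 0.0], [1.0, 0.0])   # 1.0

# exp maps any similarity in [-1, 1] into a positive range,
# so negative similarities never produce negative loss terms.
print(math.exp(sim_opposite))   # ≈ 0.368, i.e. exp(-1), inside (0, 1)
print(math.exp(sim_identical))  # ≈ 2.718, i.e. exp(1)
```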

  • @HafeezUllah
    @HafeezUllah 1 year ago +1

    Man, you have delivered the lecture extremely well.

  • @buh357
    @buh357 1 year ago

    I recently discovered self-supervised learning.
    And starting to work on it.
    Your video helped me a lot.
    Thank you for the great explanation.

  • @rajeshve7211
    @rajeshve7211 4 months ago

    Fantastic explanation. You made it look easy!

  • @zhuangzhuanghe530
    @zhuangzhuanghe530 2 years ago

    This video is the best video I've ever seen

  • @amortalbeing
    @amortalbeing 2 years ago

    Loved this. Keep up the great work.
    Thanks a lot

  • @mhadnanali
    @mhadnanali 2 years ago

    Looking forward to the implementation.

  • @thegimel
    @thegimel 2 years ago +2

    Great video on a very interesting subject. I've read the Supervised Contrastive Learning paper recently since I'm trying to use it in a problem I'm working on. Excited to watch the next video!
    P.S. It would be cool if you could do a video (or series) on N-shot learning (few-, one- and zero-shot).

    • @DeepFindr
      @DeepFindr  2 years ago

      Thank you :)
      Thanks for the recommendation, I put it on the list!

  • @謝佳雯-p6r
    @謝佳雯-p6r 9 months ago

    Thank you for this video. I learned a lot.

  • @jamesgalante7967
    @jamesgalante7967 2 years ago

    Damn. You’re a good teacher

  • @mafiamustafa
    @mafiamustafa 2 years ago

    Another amazing video

  • @Rfhbe1
    @Rfhbe1 2 years ago

    Hi. Thank you for the video. I found a defect in the NT-Xent loss formula: the temperature should be inside the exponent. Also, when you plug numbers into the formula, you should add to the denominator what's in the numerator. Have a nice day!

    • @DeepFindr
      @DeepFindr  2 years ago

      Yeah, thanks for pointing that out! I messed some things up regarding NT-Xent :D Will do some corrections in the next part :)
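
      For reference, a hedged sketch of the corrected NT-Xent computation for a single anchor (hypothetical similarity values, plain Python; not the video's actual code): the temperature divides the similarity inside the exp, and the denominator sums over every exponentiated term, including the positive pair that appears in the numerator.

```python
import math

def nt_xent_loss(anchor_sims, pos_index, temperature=0.5):
    """NT-Xent loss for one anchor, given its cosine similarities
    to all other samples in the batch (the anchor itself excluded).

    The temperature divides the similarity *inside* the exp, and the
    denominator sums over all terms, including the numerator's."""
    scaled = [math.exp(s / temperature) for s in anchor_sims]
    return -math.log(scaled[pos_index] / sum(scaled))

# Hypothetical batch: one positive pair (similarity 0.8) and two negatives.
loss = nt_xent_loss([0.8, 0.2, -0.4], pos_index=0, temperature=0.5)
print(round(loss, 4))  # ≈ 0.33
```

      Note that with this denominator the loss is always nonnegative, and it only reaches 0 when the positive term completely dominates the sum.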

  • @nikosspyrou3890
    @nikosspyrou3890 2 years ago +1

    Great video!! Could you also make a video showing an implementation of contrastive learning for a semantic segmentation problem?

    • @DeepFindr
      @DeepFindr  2 years ago

      Thanks! Soon I'll upload the implementation for point clouds. It should be quite similar, just using other layer types.
      Or do you refer to any special variants of CL for semantic segmentation?

    • @nikosspyrou3890
      @nikosspyrou3890 2 years ago +1

      Thanks for your reply! Actually, I would like to see an experimental example on an image segmentation dataset in which a contrastive loss (for example, InfoNCE) combined with a supervised loss such as cross-entropy boosts the segmentation performance.

    • @DeepFindr
      @DeepFindr  2 years ago

      I have to see if I find time, but it's certainly noted. Thanks for the suggestion!

  • @Sciencehub-oq5go
    @Sciencehub-oq5go 1 year ago

    Great video. Thanks. Could you please comment on some ways of handling false negatives?

  • @PrajwalSingh15
    @PrajwalSingh15 2 years ago

    Awesome explanation, thanks! Just a small query: how long will this series be, and what is the expected release frequency?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Thanks! I plan to upload the hands-on part within 2 weeks at the latest. That will be the final part of this introduction :)

  • @hussainmujtaba638
    @hussainmujtaba638 2 years ago

    Amazing content

  • @sakib.9419
    @sakib.9419 2 years ago

    Such a good video

  • @CollegeTasty
    @CollegeTasty 2 years ago

    Thank you!

  • @eranjitkumar11
    @eranjitkumar11 2 years ago

    Thanks for your videos. Can you create a tutorial video on Deep Graph Infomax (maybe on the Cora dataset)? This will (besides being useful for me ;) ) tie in with your last subject, GNNs with contrastive learning.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Yep, I've read the paper. Will note it down :) But the list is getting very loooong :D

  • @vignatej663
    @vignatej663 1 year ago

    But the loss at 12:50 has to be 0.8/(0.8+0.2). Since the denominator has a sigma (a sum), I don't know why you did not add 0.8 to the denominator.

    • @DeepFindr
      @DeepFindr  1 year ago

      Yeah, as mentioned in the second part, I had some errors there :\

    • @The_Night_Knight
      @The_Night_Knight 1 year ago

      @@DeepFindr What if we used disentangled variational autoencoders to rotate 2D images in 3D, not just changing the color or in-plane rotation? The model would be able to generalize far better across many more 3D angles with less data.

  • @badrinathroysam5159
    @badrinathroysam5159 1 year ago

    The temperature term seems to be misplaced

    • @DeepFindr
      @DeepFindr  1 year ago

      Yes, please see the correction at the beginning of the second part :)

  • @kornellewychan
    @kornellewychan 2 years ago

    Great

  • @INGLERAJKAMALRAJENDRA
    @INGLERAJKAMALRAJENDRA 8 months ago

    Anyone from IISc B?