Contrastive Learning in PyTorch - Part 1: Introduction

  • Published May 2, 2024
  • ▬▬ Notes ▬▬▬▬▬▬▬▬▬▬▬
    Two small things I realized while editing this video:
    - SimCLR uses two separate augmented views as positive samples.
    - Many frameworks add a separate projection head on top of the learned representations,
    which transforms them further for the contrastive loss.
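The projection-head note above can be sketched as a small MLP. This is an assumed, SimCLR-style illustration, not the video's code; the 512 and 128 dimensions are placeholders:

```python
import torch
from torch import nn

# A minimal sketch of a SimCLR-style projection head: a small MLP
# that maps the encoder's representation h to the embedding z that
# is actually fed to the contrastive loss.
projection_head = nn.Sequential(
    nn.Linear(512, 512),  # 512 is an assumed encoder output size
    nn.ReLU(),
    nn.Linear(512, 128),  # 128-dim embedding used by the loss
)

h = torch.randn(4, 512)  # a batch of 4 learned representations
z = projection_head(h)   # shape: (4, 128)
```

After training, the projection head is typically discarded and the representation h (not z) is used for downstream tasks.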
    ▬▬ Papers/Sources ▬▬▬▬▬▬▬
    - Intro: sthalles.github.io/a-few-word...
    - Survey: arxiv.org/ftp/arxiv/papers/20...
    - Supervised Contrastive Learning: arxiv.org/abs/2004.11362
    - Contrastive Loss: / losses-explained-contr...
    - Triplet Loss: towardsdatascience.com/triple...
    - NT-Xent Loss: medium.datadriveninvestor.com...
    - SimCLR
    ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
    All Icons are from flaticon: www.flaticon.com/authors/freepik
    ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
    Music from Uppbeat (free for Creators!):
    uppbeat.io/t/t-check/lemon-limes
    License code: KJ7PFP0HB9BWHJOF
    ▬▬ Used Images ▬▬▬▬▬▬▬▬▬▬▬
    All Images are from pixabay.com (Cats, Dogs, ...)
    and royalty-free.
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction
    00:22 Overview
    01:35 Supervised vs. Self-Supervised CL
    03:25 Applications
    04:24 Popular Papers
    06:17 Metric Learning
    07:21 Loss 1
    09:39 Loss 2
    10:54 Loss 3
    13:22 Variations between Losses
    13:42 Part 2 Outlook
    ▬▬ Support me if you like 🌟
    ►Website: deepfindr.github.io/
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
    ►Coursera: imp.i384100.net/b31QyP
    ►Link to this channel: bit.ly/3zEqL1W
    ►E-Mail: deepfindr@gmail.com
    ▬▬ My equipment 💻
    - Microphone: amzn.to/3DVqB8H
    - Microphone mount: amzn.to/3BWUcOJ
    - Monitors: amzn.to/3G2Jjgr
    - Monitor mount: amzn.to/3AWGIAY
    - Height-adjustable table: amzn.to/3aUysXC
    - Ergonomic chair: amzn.to/3phQg7r
    - PC case: amzn.to/3jdlI2Y
    - GPU: amzn.to/3AWyzwy
    - Keyboard: amzn.to/2XskWHP
    - Bluelight filter glasses: amzn.to/3pj0fK2

COMMENTS • 38

  • @philipmay9614 • 1 year ago +12

    Cosine similarity is between -1 and 1, not just between 0 and 1.

    • @DeepFindr • 1 year ago +5

      Oh yes, stupid mistake. Cosine is obviously also between -1 and 1.
      Thanks for pointing this out!

    • @DeepFindr • 1 year ago +14

      This will, however, not affect the general concept of the loss, because exp maps all negative similarity terms into (0, 1).
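A quick numeric check of both points in this thread, in plain Python (an illustration, not the video's code): cosine similarity can indeed reach -1, and exp maps any negative similarity into (0, 1):

```python
import math

def cosine_similarity(a, b):
    # Plain-Python cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Opposite vectors: similarity is -1, not 0.
sim = cosine_similarity([1.0, 0.0], [-1.0, 0.0])

# exp maps any negative similarity into (0, 1),
# so every term in the loss stays strictly positive.
scaled = math.exp(sim)
```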

  • @buh357 • 1 year ago

    I recently discovered self-supervised learning and am starting to work on it.
    Your video helped me a lot.
    Thank you for the great explanation.

  • @HafeezUllah • 1 year ago +1

    Man, you delivered this lecture extremely well.

  • @mhadnanali • 1 year ago

    Looking forward to the implementation.

  • @amortalbeing • 1 year ago

    Loved this. Keep up the great work.
    Thanks a lot!

  • @thegimel • 1 year ago +2

    Great video on a very interesting subject. I've read the Supervised Contrastive Learning paper recently, since I'm trying to use it in a problem I'm working on. Excited to watch the next video!
    P.S. It would be cool if you could do a video (or series) on N-shot learning (few-, one- and zero-shot).

    • @DeepFindr • 1 year ago

      Thank you :)
      Thanks for the recommendation, I put it on the list!

  • @mafiamustafa • 1 year ago

    Another amazing video.

  • @user-sn4ws7qc7n • 1 month ago

    Thank you for this video. I learned a lot.

  • @CollegeTasty • 1 year ago

    Thank you!

  • @Sciencehub-oq5go • 1 year ago

    Great video, thanks. Could you please comment on some of the ways false negatives are handled?

  • @sakib.9419 • 1 year ago

    Such a good video.

  • @zhuangzhuanghe530 • 1 year ago

    This is the best video I've ever seen.

  • @Rfhbe1 • 1 year ago

    Hi. Thank you for the video. I found a defect in the NT-Xent loss formula: the temperature should be inside the exponent. Also, when you plug numbers into the formula, the denominator should include the numerator's term. Have a nice day!

    • @DeepFindr • 1 year ago

      Yeah, thanks for pointing that out! I messed some things up regarding NT-Xent :D Will do some corrections in the next part :)
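The two fixes discussed in this thread can be sketched in plain Python (an assumed illustration, not the video's code): the temperature divides the similarity *inside* exp, and the denominator's sum includes the positive pair's term from the numerator:

```python
import math

def nt_xent_pair(neg_sims, pos_sim, temperature=0.5):
    """NT-Xent loss for a single anchor, reflecting both corrections:
    - the similarity is divided by the temperature inside exp, and
    - the denominator sums over all candidates, including the
      positive pair itself (the numerator's term).
    neg_sims: similarities of the anchor to the negative samples.
    pos_sim: similarity of the anchor to its positive (augmented) view.
    """
    numerator = math.exp(pos_sim / temperature)
    denominator = numerator + sum(
        math.exp(s / temperature) for s in neg_sims
    )
    return -math.log(numerator / denominator)

# One negative with similarity 0.2, a positive with similarity 0.8:
loss = nt_xent_pair([0.2], 0.8, temperature=1.0)
```

Lowering the temperature sharpens the softmax, so hard negatives are penalized more strongly; that is why the placement of the temperature inside the exponent matters.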

  • @hussainmujtaba638 • 1 year ago

    Amazing content.

  • @PrajwalSingh15 • 1 year ago

    Awesome explanation, thanks! Just a small query: how long will this series be, and what is the expected release frequency?

    • @DeepFindr • 1 year ago +1

      Thanks! I plan to upload the hands-on part within the next two weeks at the latest. That will be the final part of this introduction :)

  • @jamesgalante7967 • 1 year ago

    Damn. You’re a good teacher

  • @nikosspyrou3890 • 1 year ago +1

    Great video!! Could you also make a video showing an implementation of contrastive learning for a semantic segmentation problem?

    • @DeepFindr • 1 year ago

      Thanks! Soon I'll upload the implementation for point clouds. It should be quite similar, just with other layer types.
      Or are you referring to any special variants of CL for semantic segmentation?

    • @nikosspyrou3890 • 1 year ago +1

      Thanks for your reply! Actually, I would like to see an experimental example on an image segmentation dataset in which a contrastive loss (for example, InfoNCE) combined with a supervised loss such as cross-entropy boosts segmentation performance.

    • @DeepFindr • 1 year ago

      I have to see if I find time, but it's certainly noted. Thanks for the suggestion!

  • @kornellewychan • 1 year ago

    Great!

  • @eranjitkumar11 • 1 year ago

    Thanks for your videos. Can you create a tutorial video on Deep Graph Infomax (maybe on the Cora dataset)? Besides being useful for me ;), this would tie in with your last subject, GNNs with contrastive learning.

    • @DeepFindr • 1 year ago +1

      Yep, I've read the paper. Will note it down :) but the list is getting very loooong :D

  • @badrinathroysam5159 • 1 year ago

    The temperature term seems to be misplaced.

    • @DeepFindr • 1 year ago

      Yes, please see the correction at the beginning of the second part :)

  • @vignatej663 • 1 year ago

    But the loss at 12:50 has to be 0.8/(0.8+0.2). Since the denominator has a sum (sigma), I don't know why you did not add 0.8 to the denominator.

    • @DeepFindr • 1 year ago

      Yeah, as mentioned in the second part, I had some errors there :\
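The corrected arithmetic for this thread's example, using the simplified numbers from the video (0.8 for the positive pair, 0.2 for the negative term):

```python
import math

# With the numerator's term included in the denominator's sum,
# the fraction for the example at 12:50 becomes:
fraction = 0.8 / (0.8 + 0.2)   # = 0.8, not 0.8 / 0.2

# The per-pair loss is the negative log of that fraction.
loss = -math.log(fraction)     # -log(0.8) ≈ 0.223
```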

    • @The_Night_Knight • 7 months ago

      @@DeepFindr What if we used disentangled variational autoencoders to rotate 2D images in 3D, rather than just changing the color or applying 2D rotation? The model would be able to generalize far better across many more 3D angles with less data.

  • @user-wd7gv5jl9z • 28 days ago

    Anyone from IISc B?