SimCLR Explained!

  • Published 17 Nov 2024

COMMENTS • 20

  • @connor-shorten
    @connor-shorten  4 years ago +9

    1:22 Results of SimCLR
    3:03 Unsupervised Learning with Linear Evaluation
    3:57 Semi-Supervised Learning
    4:57 Transfer Learning
    5:50 Benefits of Unsupervised Representation Learning
    6:40 Overview of SimCLR
    8:45 Data Augmentation
    13:15 Projection to Contrastive Loss Function
    14:40 SimCLR Pseudocode
    16:26 Scaling up Unsupervised Learning
    18:14 Note on Contrastive Loss Functions
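
    The 13:15 and 14:40 chapters walk through the projection head and the contrastive (NT-Xent) loss. As a rough illustration (a minimal PyTorch sketch under assumed tensor shapes, not the video's or the paper's exact code), z1 and z2 below stand for the projection-head outputs of the two augmented views of a batch:

    ```python
    import torch
    import torch.nn.functional as F

    def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
        """NT-Xent loss; z1 and z2 are (N, d) projections of two views of N images."""
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d) unit vectors
        sim = z @ z.t() / temperature                       # scaled cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
        # The positive for row i is its other augmented view: i + N (or i - N).
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Example: random projections for a batch of 8 images.
    loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
    ```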

  • @maryamaghili1148
    @maryamaghili1148 2 years ago +1

    My first time watching a YouTube video at the original playback speed! Great job!

  • @weitaotang5702
    @weitaotang5702 4 years ago +3

    Hi Henry, I am new to unsupervised learning; I read their paper and watched your video. I have a short question, though: after the network is trained and we get the representation h_i for every image, what comes next? How are predictions or clusters produced based on the representation h_i? It doesn't seem to be mentioned anywhere, so I guess there is a default method in this domain that I don't know. To me, the network looks like a feature extractor that produces representations of the images, and there should then be something like k-means to cluster those representations.

    • @connor-shorten
      @connor-shorten  4 years ago +3

      It might help to think of h as a much smaller vector that represents the original image tensor (H×W×C). This vector h is then used as a set of features, like what you would see in an introductory machine learning model, e.g., classifying loans from a set of features, or the tabular flower classifiers. You could cluster these representations with k-means if you wanted, but in my opinion that would be of limited use: these vectors will likely be around 2048×1, and k-means doesn't handle such high-dimensional vectors well. Algorithms like t-SNE, or better, UMAP, handle dimensionality reduction for these kinds of vectors much better.
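
      To make the reply concrete, here is a minimal, hypothetical sketch of that workflow: treat the trained network as a frozen feature extractor, fit a linear classifier on the 2048-d representations h (linear evaluation), or reduce them to 2-D for visualization. The encoder, images, and labels are stand-ins, not the paper's code; the snippet assumes torch, torchvision, scikit-learn, and numpy.

      ```python
      import numpy as np
      import torch
      import torchvision
      from sklearn.linear_model import LogisticRegression
      from sklearn.manifold import TSNE

      # Stand-in for a SimCLR-pretrained backbone: a ResNet-50 with its classifier
      # head removed, so the forward pass returns the 2048-d representation h.
      # In practice you would load your own SimCLR checkpoint here.
      encoder = torchvision.models.resnet50()
      encoder.fc = torch.nn.Identity()
      encoder.eval()

      images = torch.randn(64, 3, 96, 96)            # placeholder image batch
      labels = np.random.randint(0, 10, size=64)     # placeholder class labels

      with torch.no_grad():
          h = encoder(images).cpu().numpy()          # (64, 2048) feature vectors

      # Linear evaluation: freeze h and train a linear classifier on a labeled split.
      clf = LogisticRegression(max_iter=1000).fit(h[:48], labels[:48])
      print("linear eval accuracy:", clf.score(h[48:], labels[48:]))

      # For visualization or clustering, reduce the 2048-d vectors first
      # (t-SNE here; UMAP via the umap-learn package works the same way).
      h_2d = TSNE(n_components=2, perplexity=10).fit_transform(h)
      ```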

  • @shairozsohail1059
    @shairozsohail1059 4 years ago +1

    Interesting. In most metric-learning training protocols, the "similar" pairs are formed by sampling images from the same class; here, however, the similar pairs are formed by applying image transformations to the same image. How does this ensure that all examples of the same class are clustered together?
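
    For what it's worth, the pairing the question describes can be sketched in a few lines (a hypothetical torchvision illustration, not the paper's exact pipeline): two independent augmentation draws from the same source image form the positive pair, and the other images in the batch serve as negatives. Since class labels are never used, same-class clustering is an emergent effect of the augmentations and the contrastive loss, not something the protocol guarantees.

    ```python
    import torchvision.transforms as T
    from PIL import Image

    # SimCLR-style augmentation family (illustrative parameter choices).
    augment = T.Compose([
        T.RandomResizedCrop(96),
        T.RandomHorizontalFlip(),
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),
        T.RandomGrayscale(p=0.2),
        T.ToTensor(),
    ])

    def make_positive_pair(img: Image.Image):
        # Two independent draws applied to the SAME image; within a batch,
        # every other image's views act as negatives for this pair.
        return augment(img), augment(img)
    ```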

  • @GuillermoValleCosmos
    @GuillermoValleCosmos 4 years ago +1

    It would be cool if you added links to your videos that talk about papers you cite in the video, e.g. ImageBERT here

    • @connor-shorten
      @connor-shorten  4 years ago

      Thank you for the suggestion! I usually either do this or put an image of the title of the paper and the authors.

  • @wowlikefun
    @wowlikefun 4 years ago

    Great and laconic explanation, thank you!

  • @Syzygyyy
    @Syzygyyy 4 years ago +1

    Useful video, thanks! Keep up the good work

  • @jinusbordbar1264
    @jinusbordbar1264 3 years ago +8

    You speak super fast...

  • @networkgangster1856
    @networkgangster1856 4 years ago

    Hi Henry,
    do you have an email address where I can contact you?
    I have some questions on the reinforcement learning topic, and I think you could help me out a lot.

  • @diego.guillen
    @diego.guillen 4 years ago

    awesome!