1:22 Results of SimCLR
3:03 Unsupervised Learning with Linear Evaluation
3:57 Semi-Supervised Learning
4:57 Transfer Learning
5:50 Benefits of Unsupervised Representation Learning
6:40 Overview of SimCLR
8:45 Data Augmentation
13:15 Projection to Contrastive Loss Function
14:40 SimCLR Pseudocode
16:26 Scaling up Unsupervised Learning
18:14 Note on Contrastive Loss Functions
My first time watching a YouTube video at the original playback speed! Great job!
Hi Henry, I'm new to unsupervised learning; I read the paper and watched your video. I have a short question, though: what do you do next, after the network is trained and we have the representation h_i for every image? How are predictions or clusters made based on h_i? It doesn't seem to be mentioned anywhere, so I'm guessing there is a default method in this field that I don't know. To me, the network looks like a feature extractor that produces representations of the images, and there should be something like k-means to cluster those representations.
It might help to think of h as a much smaller vector that represents the original image tensor (H×W×C). This vector h is then used as features, just like the feature columns in an introductory machine learning example, e.g. classifying loans from a set of attributes or the iris-flower tabular classifier. You could cluster these representations with k-means if you want, but in my opinion that alone isn't very useful: the vectors will likely be around 2048-dimensional, and k-means doesn't cope well with distances in such high-dimensional spaces. Algorithms like t-SNE or, better, UMAP are designed to reduce vectors like these to a low-dimensional space first, which makes clustering or visualization much more meaningful.
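To make the "use h as features" idea concrete, here is a minimal sketch. It assumes a trained SimCLR base encoder `encoder` (projection head removed) and PyTorch-style loaders `train_loader`/`test_loader`; those names are hypothetical, and the scikit-learn linear probe below is an illustration of the idea, not the paper's exact evaluation protocol.

```python
# Minimal sketch: use the frozen encoder's representations h as features.
# `encoder` is assumed to be the trained SimCLR base network (e.g. a ResNet-50
# with the projection head removed), producing a ~2048-d vector h per image.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder, loader, device="cpu"):
    encoder.eval()
    feats, labels = [], []
    for images, y in loader:                  # loader yields (images, labels)
        h = encoder(images.to(device))        # h: [batch, 2048]
        feats.append(h.cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# Linear evaluation: train a linear classifier on the frozen features h.
h_train, y_train = extract_features(encoder, train_loader)
h_test, y_test = extract_features(encoder, test_loader)
clf = LogisticRegression(max_iter=1000).fit(h_train, y_train)
print("linear-probe accuracy:", clf.score(h_test, y_test))

# For clustering or plots, reduce the 2048-d vectors first (e.g. with UMAP),
# then run k-means or visualize the low-dimensional embedding.
# import umap; emb = umap.UMAP(n_components=2).fit_transform(h_train)
```

This frozen-features-plus-linear-classifier setup is what the "Unsupervised Learning with Linear Evaluation" section of the video (3:03) refers to.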
Interesting. In most metric-learning training protocols, the "similar" pairs are formed by sampling images from the same class, but here they form the similar pairs by applying image transformations to the same image. How does this ensure that all examples of the same class end up clustered together?
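For reference, here is a minimal sketch of how those similar pairs are built in SimCLR-style training: two independent random augmentations of the same image form the positive pair, and every other image in the batch serves as a negative, with class labels never entering the loss. The specific transform settings below are illustrative, not the paper's exact configuration.

```python
# Sketch of positive-pair construction via data augmentation (no labels used).
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def make_views(pil_images):
    # For each image, draw two independent augmentations: the positive pair.
    view1 = torch.stack([augment(img) for img in pil_images])
    view2 = torch.stack([augment(img) for img in pil_images])
    # view1[i] and view2[i] are positives; all other cross-pairs act as negatives
    # in the contrastive loss.
    return view1, view2
```

Same-class examples end up nearby only indirectly: images of one class tend to share the features that survive these augmentations, so their representations are pulled toward similar regions even though the objective itself never references classes.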
It would be cool if you added links to your videos that talk about papers you cite in the video, e.g. ImageBERT here
Thank you for the suggestion! I usually either do this or put an image of the title of the paper and the authors.
Great and laconic explanation, thank you!
Useful video, thanks! Keep up the good work
Thank you!!
You speak super fast....
Hi Henry,
do you have an email address where I can contact you?
I do have some questions on the Reinforcement Learning Topic and I think you could help me out a lot
awesome!