Hey, when you say we pass it through the same neural network, do you mean that we create and train two instances of the same model, each with its own "prediction", and then use each representation for maximizing agreement? Sort of done in parallel? Or is it the same instance of the model that creates both representations, done sequentially? If the first is the case, does that mean that during inference we use multiple models to make predictions?
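For anyone else wondering the same thing: in SimCLR-style setups it is a single encoder with one set of weights, called once per augmented view, and at inference time only that one encoder is used (the projection head is discarded after pre-training). A minimal PyTorch sketch, where the ResNet backbone, the projection head sizes, and the toy augment() function are purely illustrative placeholders, not anything from the video:

```python
import torch
import torch.nn as nn
import torchvision

# One encoder instance -- both augmented views share the same weights.
encoder = torchvision.models.resnet18(weights=None)
encoder.fc = nn.Identity()  # drop the classification head, keep the 512-d features

# Small projection head used only during contrastive pre-training.
projector = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

def augment(batch):
    # placeholder for the real augmentation pipeline (random crop, color jitter, blur, ...)
    return batch + 0.1 * torch.randn_like(batch)

x = torch.randn(8, 3, 224, 224)         # a toy batch of images
view1, view2 = augment(x), augment(x)   # two different random augmentations of the same batch

# Same weights, called twice (sequentially) -- not two separately trained models.
z1 = projector(encoder(view1))
z2 = projector(encoder(view2))
# z1 and z2 are then pushed to agree by the contrastive loss.
# At inference only `encoder` is kept, so a single model makes predictions.
```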
You know how music apps use filters, where there's a series of filters stacked on top of each other? What if we did that with a neural net? Each network could do its modification differently, like a series of modules serving different purposes, where you feed random input to the first network and get a totally separated object with the right labels out of the last one. The rest you have to figure out yourself.
Great work! I would like to share as constructive criticism that I still struggled to get an intuition for what a negative sample is or could be in this case. Could anyone clarify it for me?
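For anyone else stuck on the same point: in SimCLR-style contrastive learning the negatives are not labeled or picked by hand; they are simply all the other images in the same batch (and their augmentations). A rough NT-Xent-style sketch, with the batch size and temperature chosen purely for illustration:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    N = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities

    # A sample is never contrasted with itself.
    mask = torch.eye(2 * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))

    # Row i's positive is the other augmentation of the same image;
    # every remaining row in the batch acts as a negative "for free".
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(N)]).to(z.device)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))  # toy projections
```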
Great work! Your explanation is highly comprehensive and well-structured.
Really appreciate your efforts in understanding the concepts easily. Thanks
This channel is perfect
🤩 Thank you!!
Great content 👏🏼👏🏼
🙋🏼♂️
Keep it up, you make good content.
Thanks, I'll do my best!
Which content do you enjoy the most? 😬💛
Hello @borismeinardus, your content is great, but could you please create a video on implementing training and fine-tuning based on SimCLR?
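Not a full implementation video, but in the meantime here is a rough sketch of the usual fine-tuning / linear-evaluation step after SimCLR pre-training; the checkpoint path, class count, batch, and optimizer settings are all made up for illustration:

```python
import torch
import torch.nn as nn
import torchvision

# Keep the pre-trained encoder, drop the projection head, add a linear classifier.
encoder = torchvision.models.resnet18(weights=None)
encoder.fc = nn.Identity()
# encoder.load_state_dict(torch.load("simclr_encoder.pt"))  # hypothetical checkpoint path

classifier = nn.Linear(512, 10)   # 10 classes, just as an example
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

images = torch.randn(8, 3, 224, 224)   # toy labeled batch
labels = torch.randint(0, 10, (8,))

with torch.no_grad():                  # linear evaluation: the encoder stays frozen
    feats = encoder(images)

optimizer.zero_grad()
loss = nn.functional.cross_entropy(classifier(feats), labels)
loss.backward()
optimizer.step()
```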
Insightful
Thank you!
Love your content but your soundtrack is irrelevant and distracting.
Noted! Will see if I can find a better/calmer one and make it quieter :)
@@borismeinardus Honestly, you don't need any of it -- do NeurIPS talks have soundtracks?
@@gondwana6303 Yeah, they don't. That said, this is a YouTube video and I try to make it a bit more "friendly" than a NeurIPS talk haha