DINO: Emerging Properties in Self-Supervised Vision Transformers

  • Published 15 Jun 2024
  • Presenter: Michael Zhang
    Affiliation: Stanford University
    Paper title: DINO: Emerging Properties in Self-Supervised Vision Transformers
    Authors: Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
    Institutions: Facebook AI Research, Inria, Sorbonne University
    Paper: arxiv.org/abs/2104.14294
    Abstract:
    "In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) [18] that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder [31], multi-crop training [10], and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base."

COMMENTS • 16

  • @adizhol · 3 years ago · +6

    The attention-map visualizations are computed from the output [CLS] token.
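For readers who want to reproduce this: the masks in the paper come from the self-attention of the final block, with the [CLS] token as the query. A hedged sketch, assuming an attention tensor of shape [batch, heads, tokens, tokens] such as the one returned by `get_last_selfattention` in the official DINO repository (the helper name `cls_attention_maps` is illustrative):

```python
# Hedged sketch: per-head [CLS] attention maps from a ViT's last block.
import torch

def cls_attention_maps(attn: torch.Tensor, patch_grid: int) -> torch.Tensor:
    """One map per head over the patch grid (illustrative helper)."""
    # Row 0 is the [CLS] query; columns 1: are the patch tokens.
    cls_attn = attn[0, :, 0, 1:]                          # [heads, patches]
    return cls_attn.reshape(-1, patch_grid, patch_grid)   # [heads, H, W]
```

Here `patch_grid` is the image side length in patches, e.g. 28 for a 448x448 image with 16x16 patches.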

  • @mathildecaron1821 · 3 years ago · +4

    Nice video! A minor remark on the last question: we do show a comparison with other self-supervised losses on the Jaccard metric with DeiT-S 16x16 in the Appendix :). Our conclusion is that the segmented heat maps emerge for all the SSL methods we experimented with!

    • @stanfordcontrastivesslearn3141 · 3 years ago

      Thank you for the comment; that clarifies it! And keep up the good work. It was very nice to review the paper!
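For context on the metric mentioned in this thread: the Jaccard comparison is plain intersection-over-union between a thresholded attention mask and the ground-truth segmentation mask. A minimal NumPy sketch (the function name is illustrative):

```python
# Minimal NumPy sketch of the Jaccard metric (intersection over union)
# between a predicted attention mask and a ground-truth mask.
import numpy as np

def jaccard_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Both inputs are boolean masks of the same shape."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union else 1.0
```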

  • @AbcDef-xm9rp · 3 years ago · +2

    It's good to see you guys explaining the latest SOTA techniques. Keep up the good work, guys!

  • @piku1920 · 3 years ago · +2

    Hi! For the visualisation of masks, the paper mentions that the mask is obtained by thresholding the self-attention maps to keep 60% of the mass. What does the mass represent here? Could you please explain this thresholding technique a bit? Thank you!
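A hedged sketch of what that thresholding means: each attention row is a softmax, so the weights over patches form a probability mass, and the mask keeps the smallest set of patches whose weights sum to 60% of that mass. The helper below is illustrative; the official DINO repository's visualize_attention.py implements the same idea.

```python
# Illustrative sketch (PyTorch): keep the smallest set of patches whose
# attention weights cover 60% of the total attention "mass".
import torch

def threshold_by_mass(cls_attn: torch.Tensor, keep: float = 0.6) -> torch.Tensor:
    """Boolean mask over the patch grid covering `keep` of the attention mass."""
    weights = cls_attn.flatten()
    weights = weights / weights.sum()                    # total mass = 1
    vals, order = torch.sort(weights, descending=True)
    cumulative = torch.cumsum(vals, dim=0)
    n_keep = int((cumulative < keep).sum().item()) + 1   # smallest set crossing 60%
    mask = torch.zeros_like(weights, dtype=torch.bool)
    mask[order[:n_keep]] = True
    return mask.reshape(cls_attn.shape)
```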

  • @kartiksachdev8807 · 3 years ago · +1

    Great explanation, guys!! Is there a Slack or Discord channel where I could connect with you and contribute in the future?

    • @stanfordcontrastivesslearn3141 · 3 years ago

      Hi Kartik, thank you! Very nice to hear that you want to contribute too! Would you like to only participate in the discussion or also present a paper yourself?

    • @kartiksachdev8807 · 3 years ago

      @stanfordcontrastivesslearn3141 Thank you for the reply! I would like to present a paper, if that's possible.

    • @stanfordcontrastivesslearn3141 · 3 years ago

      @kartiksachdev8807 Do you already know what paper you would like to present?

    • @kartiksachdev8807 · 3 years ago

      @stanfordcontrastivesslearn3141 Yes, I have one paper in mind.

    • @stanfordcontrastivesslearn3141 · 3 years ago

      @kartiksachdev8807 OK, nice! You can write to us at stanfordcontrastivelearning [at] gmail.com: send us a bio, let us know which article you would like to present, and we will give you the instructions.

  • @prof_shixo · 3 years ago · +1

    Is this group open to scientific comments or not?! I posted a critique of the ViT method and it has been deleted. Really weird behaviour!

    • @stanfordcontrastivesslearn3141 · 3 years ago · +5

      Hi Sherif, yes this channel is very open to scientific comments and feedback from the community, thank you very much for participating. I am not sure what happened with your comment. I am only able to see the beginning of your comment in the channel notifications. Do you mind trying to post it again? I suspect it may have been automatically deleted for some reason. I see that we got the notification about your comment twice, so my best guess right now would be that you submitted it twice by accident and that it was detected as spam. But that's just a wild guess. If you are still having issues, just send your comment to stanfordcontrastivelearning [at] gmail.com and we will repost it with quotation marks. We do not want to censor anybody!

  • @noamzilo6730 · 1 year ago

    This is, like, impossible to, like, listen to, right?