Contrastive Learning in PyTorch - Part 2: CL on Point Clouds

  • Published 18 Dec 2024

COMMENTS • 33

  • @mkamp
    @mkamp 2 years ago +9

    The way you corrected the oops from your last video earns a lot of trust. Kudos.

    • @mkamp
      @mkamp 2 years ago +1

      FWIW, when applying the cosine similarity to BOW vectors (counts, non-negative numbers), the value would be between 0 and 1.
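A quick numeric check of that claim (the count vectors below are made up for illustration):

```python
import numpy as np

def cos_sim(u, v):
    # Cosine similarity: dot product over the product of the norms.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up bag-of-words count vectors (non-negative entries).
a = np.array([3.0, 0.0, 1.0, 2.0])
b = np.array([1.0, 4.0, 0.0, 2.0])

s = cos_sim(a, b)
# With non-negative entries the dot product cannot be negative,
# so the similarity lands in [0, 1] instead of [-1, 1].
assert 0.0 <= s <= 1.0
```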

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Thanks!

  • @yuanluo6619
    @yuanluo6619 10 months ago +1

    Great work! Combined with part 1, this presents a very complete picture of contrastive learning. Thanks, it helps a lot.

  • @amirrezafarnoosh4259
    @amirrezafarnoosh4259 1 year ago +1

    Great tutorial! In my opinion, to enhance the embedding evaluation part, it's crucial to establish a solid baseline. One effective approach would involve applying TSNE directly to the (high-dimensional) point cloud data (e.g., after applying a simple permutation-invariant operation). By comparing these TSNE plots with the ones generated from the learned embeddings, we can effectively gauge the impact of the contrastive deep learning framework on the separation performance. This would provide a fair assessment of how well the framework has improved the data representation.
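A minimal sketch of the baseline this comment proposes (the data, the cloud/point counts, and the pooling choice are all illustrative; max/mean pooling over points is one simple permutation-invariant option):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Illustrative stand-in for a dataset of point clouds: (n_clouds, n_points, 3).
clouds = rng.random((50, 256, 3))

# Simple permutation-invariant summary per cloud: concatenate the
# per-coordinate max and mean over the point axis.
pooled = np.concatenate([clouds.max(axis=1), clouds.mean(axis=1)], axis=1)

# t-SNE on the raw pooled features; comparing this plot with t-SNE of the
# learned embeddings gauges how much the contrastive model actually adds.
xy = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(pooled)
print(xy.shape)  # (50, 2)
```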

  • @thegimel
    @thegimel 2 years ago +3

    Great follow-up video. I really like the way you speak simply about whatever concept you are explaining. I think it has good pedagogical value. Keep it up 👍

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Thank you for the kind words! :)

  • @evgenii.v
    @evgenii.v 1 year ago

    Very cool videos, thank you man!
    Your channel is truly underrated, keep it going!

  • @hmind9836
    @hmind9836 2 years ago +1

    You're awesome! Thanks for making these videos and sharing your knowledge. I hope you keep creating this kind of content, I'll keep watching!!!

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Thank you :) I'll keep them coming!

  • @divelix2666
    @divelix2666 7 months ago

    Very useful and informative video, especially the PointNet and batch size parts. Special thanks for using the point cloud domain!

    • @DeepFindr
      @DeepFindr  7 months ago

      Glad it was useful! :)

  • @elviska5173
    @elviska5173 2 years ago +2

    Very excellent tutorials! Thank you very much for your generous help to beginners like me.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Glad that you liked it!

  • @lores1413
    @lores1413 2 years ago

    Thank you so much. Please keep posting.

  • @juikaiwang
    @juikaiwang 1 year ago +1

    Keep up the good work. You are amazing. You have the talent for teaching! It would be fantastic if you can implement something like DINO!

  • @mohammadnafie3327
    @mohammadnafie3327 1 year ago

    The quality of your videos and explanations is amazing! Actually, I am really thinking about requesting your expertise. Is any work going to be done on GNN contrastive learning? I am very interested in the idea of extracting invariant features from GNN embeddings. Keep up the good work! ^_^

  • @MrXboy3x
    @MrXboy3x 1 year ago

    Good video, but one thing: I do not think a voxel grid is a sparse representation. Sparsity happens when points have distance between them, like a low-resolution point cloud. Meshes and voxels do not have this effect!

  • @GayalKuruppu
    @GayalKuruppu 4 months ago

    What's the similarity between NTXentLoss (or InfoNCE) and SimCLR loss?

  • @nikosspyrou3890
    @nikosspyrou3890 2 years ago

    Great explanation! In the case of semantic segmentation how can we calculate the contrastive loss for pixel embeddings?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Have you seen this explanation? towardsdatascience.com/pixel-level-dense-contrastive-learning-6558691eeb67

  • @abdullahziker
    @abdullahziker 2 years ago

    Love the content ....

  • @NouhaShab
    @NouhaShab 1 year ago

    Useful video. Question: why are no augmentations applied at test time?

    • @DeepFindr
      @DeepFindr  1 year ago

      Hi, augmentations are only used to introduce more variance during training (basically to generate more samples to make the model invariant to different transformations). At test time you simply want to evaluate your real data.
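A minimal sketch of that train/test split of augmentations (the z-axis rotation and the shapes are illustrative, not the video's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotate_z(points):
    # Illustrative train-time augmentation: random rotation about the z-axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def prepare(points, train):
    # Augment only during training; at test time evaluate the raw cloud.
    return random_rotate_z(points) if train else points

cloud = rng.standard_normal((1024, 3))
assert prepare(cloud, train=False) is cloud             # untouched at test time
assert prepare(cloud, train=True).shape == cloud.shape  # same shape, rotated
```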

    • @NouhaShab
      @NouhaShab 1 year ago

      @DeepFindr Thank you for the prompt answer.

  • @zahra7619
    @zahra7619 9 months ago

    In the "install torch" section, what is the link?

  • @eranjitkumar11
    @eranjitkumar11 2 years ago

    Thanks for the video. Will the next be a Part 3, with an Infomax (contrastive GNN) tutorial? ;)

    • @DeepFindr
      @DeepFindr  2 years ago

      I'm actually thinking about it :D but I wonder what additional value it adds. The only differences are the augmentations and the data mode. Is there anything in particular you are interested in?

    • @eranjitkumar11
      @eranjitkumar11 2 years ago +1

      1) In short: there are many things.
      2) Sorry for my short answer. ;) A bit more seriously, I'm still trying to improve my skills to go from reading papers to reproducing results, and video tutorials and blog posts are very helpful for me here. In particular, your series on GNNs (a new subject for me) was very helpful (especially your "Understanding Graph Attention Networks" video, which helped me a lot with the math part of GNNs). I want to work on self-supervised GNNs (starting with Infomax), but for that I need a better understanding of the math. It would be very helpful to have "explained" "working code" (as you do) which I can compare with my understanding of the paper. Sorry for my rambling, but I hope it increased the chances of you making part 3: an Infomax tutorial. ;)

  • @loveislulu264
    @loveislulu264 1 year ago

    Can it be implemented on text data as well? If so, can you make a video for it?

    • @loveislulu264
      @loveislulu264 1 year ago

      By the way, the explanation was very good, thank you.

  • @buh357
    @buh357 1 year ago

    awesome.

  • @hussainshaik4390
    @hussainshaik4390 11 months ago

    Good explanation, but the point cloud example is difficult to follow. I wish you had picked 2D images instead.