Contrastive Learning in PyTorch - Part 2: CL on Point Clouds

  • Published 20 May 2024
  • ▬▬ Papers/Sources ▬▬▬▬▬▬▬
    - Colab Notebook: colab.research.google.com/dri...
    - SimCLRv2: arxiv.org/pdf/2006.10029.pdf
    - PointNet: arxiv.org/pdf/1612.00593.pdf
    - PointNet++: arxiv.org/pdf/1706.02413.pdf
    - EdgeConv: arxiv.org/pdf/1801.07829.pdf
    - Contrastive Learning Survey: arxiv.org/ftp/arxiv/papers/20...
    ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
    All Icons are from flaticon: www.flaticon.com/authors/freepik
    ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
    Music from Uppbeat (free for Creators!):
    uppbeat.io/t/t-check/lemon-limes
    License code: KJ7PFP0HB9BWHJOF
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction
    00:22 Errors from last video
    01:41 Notebook Setup [CODE]
    02:42 Dataset Intro [CODE]
    05:07 Augmentations and Bias
    06:26 Augmentations [CODE]
    09:12 Machine Learning on Point Clouds
    11:48 PointNet
    13:30 PointNet++
    14:32 EdgeConv
    15:53 Other Methods
    16:09 Model Architecture
    17:25 Model Implementation [CODE]
    20:11 Training [CODE]
    21:05 Batch sizes in CL
    22:00 Training cont [CODE]
    22:40 Batching in CL
    23:15 Training cont [CODE]
    24:08 Embedding evaluation
    27:00 Outro
    ▬▬ Support me if you like 🌟
    ►Website: deepfindr.github.io/
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
    ►Coursera: imp.i384100.net/b31QyP
    ►Link to this channel: bit.ly/3zEqL1W
    ►E-Mail: deepfindr@gmail.com
    ▬▬ My equipment 💻
    - Microphone: amzn.to/3DVqB8H
    - Microphone mount: amzn.to/3BWUcOJ
    - Monitors: amzn.to/3G2Jjgr
    - Monitor mount: amzn.to/3AWGIAY
    - Height-adjustable table: amzn.to/3aUysXC
    - Ergonomic chair: amzn.to/3phQg7r
    - PC case: amzn.to/3jdlI2Y
    - GPU: amzn.to/3AWyzwy
    - Keyboard: amzn.to/2XskWHP
    - Bluelight filter glasses: amzn.to/3pj0fK2

COMMENTS • 32

  • @mkamp
    @mkamp 1 year ago +7

    It’s earning a lot of trust how you corrected the oops from your last video. Kudos.

    • @mkamp
      @mkamp 1 year ago +1

      FWIW, when applying the cosine similarity to BOW vectors (counts, i.e. non-negative numbers), the value would be between 0 and 1 (see the sketch after this thread).

    • @DeepFindr
      @DeepFindr 1 year ago +1

      Thanks!
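
A quick toy check of the claim above (the numbers are made up, not from the video): with non-negative bag-of-words count vectors the dot product cannot be negative, so the cosine similarity falls in [0, 1] rather than [-1, 1].

```python
import torch
import torch.nn.functional as F

# Hypothetical bag-of-words count vectors (non-negative entries only).
a = torch.tensor([3.0, 0.0, 1.0, 2.0])
b = torch.tensor([1.0, 4.0, 0.0, 2.0])

# Cosine similarity = dot(a, b) / (||a|| * ||b||).
# A dot product of non-negative vectors is >= 0, so the result is in [0, 1].
print(F.cosine_similarity(a, b, dim=0))  # ~0.41 here, never negative
```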

  • @evgenii.v
    @evgenii.v 11 months ago

    Very cool videos, thank you man!
    Your channel is truly underrated, keep it going!

  • @yuanluo6619
    @yuanluo6619 3 months ago

    Great work! Combined with part 1, it presents a very complete picture of contrastive learning. Thanks, it helps a lot.

  • @amirrezafarnoosh4259
    @amirrezafarnoosh4259 10 months ago +1

    Great tutorial! In my opinion, to enhance the embedding evaluation part, it's crucial to establish a solid baseline. One effective approach would involve applying TSNE directly to the (high-dimensional) point cloud data (e.g., after applying a simple permutation-invariant operation). By comparing these TSNE plots with the ones generated from the learned embeddings, we can effectively gauge the impact of the contrastive deep learning framework on the separation performance. This would provide a fair assessment of how well the framework has improved the data representation.
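
A rough sketch of the suggested baseline (all data, shapes, and names below are placeholders, not taken from the notebook): run t-SNE on a simple permutation-invariant summary of the raw point clouds, here max pooling over the point dimension, and compare that plot with t-SNE on the learned embeddings.

```python
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder inputs: N point clouds with 1024 points each, learned
# embeddings from the contrastive encoder, and class labels for colouring.
point_clouds = torch.randn(500, 1024, 3)
embeddings = torch.randn(500, 128)
labels = torch.randint(0, 10, (500,))

# Simple permutation-invariant baseline: max pooling over the point dimension.
baseline_features = point_clouds.max(dim=1).values  # [N, 3]

for name, feats in [("raw points (max-pooled)", baseline_features),
                    ("contrastive embeddings", embeddings)]:
    proj = TSNE(n_components=2, perplexity=30).fit_transform(feats.numpy())
    plt.figure()
    plt.scatter(proj[:, 0], proj[:, 1], c=labels.numpy(), s=5, cmap="tab10")
    plt.title(f"t-SNE of {name}")
plt.show()
```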

  • @divelix2666
    @divelix2666 15 days ago

    Very useful and informative video, especially the PointNet and batch size parts. Special thanks for using the point cloud domain!

    • @DeepFindr
      @DeepFindr 15 days ago

      Glad it was useful! :)

  • @hmind9836
    @hmind9836 1 year ago +1

    You're awesome! Thanks for making these videos and sharing your knowledge. I hope you keep creating this kind of content, I'll keep watching!!!

    • @DeepFindr
      @DeepFindr 1 year ago +1

      Thank you :) I'll keep them coming!

  • @lores1413
    @lores1413 1 year ago

    Thank you so much. Please keep posting.

  • @thegimel
    @thegimel 1 year ago +3

    Great follow-up video. I really like the way you speak simply about whatever concept you are explaining; I think it has good pedagogical value. Keep it up 👍

    • @DeepFindr
      @DeepFindr 1 year ago +1

      Thank you for the kind words! :)

  • @juikaiwang
    @juikaiwang 1 year ago +1

    Keep up the good work. You are amazing. You have a talent for teaching! It would be fantastic if you could implement something like DINO!

  • @elviska5173
    @elviska5173 1 year ago +2

    Excellent tutorials! Thank you very much for your generous help to beginners like me.

  • @abdullahziker
    @abdullahziker 1 year ago

    Love the content ....

  • @mohammadnafie3327
    @mohammadnafie3327 11 months ago

    The quality of your videos and explanations is amazing! Actually, I am really thinking about requesting your expertise. Is any work going to be done on GNN contrastive learning? I am very interested in the idea of extracting invariant features from GNN embeddings. Keep up the good work! ^_^

  • @buh357
    @buh357 1 year ago

    awesome.

  • @MrXboy3x
    @MrXboy3x 6 months ago

    Good video, but one thing: I do not think a voxel grid is a sparse representation. Sparsity happens when points have distance between them, like a low-resolution point cloud; meshes and voxels do not have this effect!

  • @nikosspyrou3890
    @nikosspyrou3890 1 year ago

    Great explanation! In the case of semantic segmentation, how can we calculate the contrastive loss for pixel embeddings?

    • @DeepFindr
      @DeepFindr 1 year ago +1

      Have you seen this explanation? towardsdatascience.com/pixel-level-dense-contrastive-learning-6558691eeb67
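
For anyone looking for a concrete starting point, here is a minimal, illustrative pixel-wise supervised-contrastive (InfoNCE-style) loss: pixels sharing a semantic label are treated as positives, everything else as negatives. This is a sketch of one common formulation, not necessarily the method from the linked article; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(pixel_emb, pixel_labels, temperature=0.1, num_samples=256):
    # pixel_emb: [B, C, H, W] embeddings from a segmentation head
    # pixel_labels: [B, H, W] integer class labels
    B, C, H, W = pixel_emb.shape
    emb = pixel_emb.permute(0, 2, 3, 1).reshape(-1, C)   # [B*H*W, C]
    labels = pixel_labels.reshape(-1)                    # [B*H*W]

    # Sub-sample pixels: using every pixel is usually too expensive.
    idx = torch.randperm(emb.size(0), device=emb.device)[:num_samples]
    emb, labels = F.normalize(emb[idx], dim=1), labels[idx]

    sim = emb @ emb.t() / temperature                          # [S, S] similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability

    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)       # exclude self-pairs
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Pixels with the same label are positives; average their log-probabilities.
    positives = (labels[:, None] == labels[None, :]).float().masked_fill(self_mask, 0.0)
    loss = -(positives * log_prob).sum(dim=1) / positives.sum(dim=1).clamp(min=1)
    return loss.mean()

# Toy usage with random data (shapes are illustrative):
emb = torch.randn(2, 32, 64, 64)
lbl = torch.randint(0, 5, (2, 64, 64))
print(pixel_contrastive_loss(emb, lbl))
```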

  • @loveislulu264
    @loveislulu264 9 months ago

    Can it be implemented for text data as well? If so, can you make a video on it?

    • @loveislulu264
      @loveislulu264 9 months ago

      By the way, the explanation was very good, thank you.

  • @eranjitkumar11
    @eranjitkumar11 1 year ago

    Thanks for the video. Will the next one be Part 3, with an Infomax (contrastive GNN) tutorial? ;)

    • @DeepFindr
      @DeepFindr 1 year ago

      I'm actually thinking about it :D but I wonder what additional value it adds. The only differences are the augmentations and the data modality (a rough sketch of typical graph augmentations follows this thread). Is there anything in particular you are interested in?

    • @eranjitkumar11
      @eranjitkumar11 1 year ago +1

      1) In short: There are many things
      2) Sorry for my short answer. ;) A bit more seriously: I'm still trying to improve my skills at going from reading papers to reproducing results, and video tutorials and blog posts are very helpful for me there. In particular, your series on GNNs (a new subject for me) was very helpful, especially your "Understanding Graph Attention Networks" video, which helped me a lot with the math part of GNNs. I want to work on self-supervised GNNs (starting with Infomax), but for that I need a better understanding of the math, and it would be very helpful to have "explained", "working code" (as you do) that I can compare with my understanding of the paper. Sorry for my rambling, but I hope it increased the chances of you making Part 3: an Infomax tutorial. ;)
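
As mentioned in the reply above, here is a rough sketch of two augmentations commonly used for graph contrastive learning: random edge dropping and node feature masking. Function names and shapes are illustrative only.

```python
import torch

def drop_edges(edge_index, drop_prob=0.2):
    # edge_index: [2, E] COO edge list; keep each edge with probability 1 - drop_prob.
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]

def mask_node_features(x, mask_prob=0.1):
    # x: [num_nodes, num_features]; zero out a random subset of feature dimensions.
    mask = torch.rand(x.size(1)) >= mask_prob
    return x * mask.float()

# Two augmented "views" of the same toy graph; these would be fed through the
# encoder and compared with the same NT-Xent loss used in the video.
x = torch.randn(10, 16)                     # toy node features
edge_index = torch.randint(0, 10, (2, 40))  # toy edge list
view1 = (mask_node_features(x), drop_edges(edge_index))
view2 = (mask_node_features(x), drop_edges(edge_index))
```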

  • @zahra7619
    @zahra7619 2 months ago

    In the "install torch" section, what is the link?

  • @NouhaShab
    @NouhaShab 1 year ago

    Useful video. Question: why are no augmentations applied at test time?

    • @DeepFindr
      @DeepFindr 1 year ago

      Hi, augmentations are only used to introduce more variance during training (basically to generate more samples and make the model invariant to different transformations). At test time you simply want to evaluate your real data (a minimal train/test transform sketch follows this thread).

    • @NouhaShab
      @NouhaShab 1 year ago

      @DeepFindr Thank you for the prompt answer.
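
A minimal illustration of the point in the reply above (the function names are made up for this sketch): augmentations belong only in the training pipeline, while evaluation uses the raw point clouds.

```python
import torch

def jitter(points, sigma=0.01):
    # points: [num_points, 3]; add small Gaussian noise (train-time only).
    return points + sigma * torch.randn_like(points)

def train_transform(points):
    return jitter(points)   # plus rotation, scaling, ... during training

def test_transform(points):
    return points           # no augmentation at evaluation time

train_sample = train_transform(torch.randn(1024, 3))
test_sample = test_transform(torch.randn(1024, 3))
```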

  • @hussainshaik4390
    @hussainshaik4390 4 months ago

    Good explanation, but a bad example; the point cloud is difficult to follow. I wish you had picked 2D images instead.