Vision Transformer Basics

  • Published 15 May 2024
  • An introduction to the use of transformers in computer vision.
    Timestamps:
    00:00 - Vision Transformer Basics
    01:06 - Why Care about Neural Network Architectures?
    02:40 - Attention is all you need
    03:56 - What is a Transformer?
    05:16 - ViT: Vision Transformer (Encoder-Only)
    06:50 - Transformer Encoder
    08:04 - Single-Head Attention
    11:45 - Multi-Head Attention
    13:36 - Multi-Layer Perceptron
    14:45 - Residual Connections
    16:31 - LayerNorm
    18:14 - Position Embeddings
    20:25 - Cross/Causal Attention
    22:14 - Scaling Up
    23:03 - Scaling Up Further
    23:34 - What factors are enabling effective further scaling?
    24:29 - The importance of scale
    26:04 - Transformer scaling laws for natural language
    27:00 - Transformer scaling laws for natural language (cont.)
    27:54 - Scaling Vision Transformer
    29:44 - Vision Transformer and Learned Locality
    Topics: #computervision #ai #introduction
    Notes:
    This lecture was given as part of the 2022/2023 4F12 course at the University of Cambridge.
    It is an update to a previous lecture, which can be found here: • Neural network archite...
    Links:
    Slides (pdf): samuelalbanie.com/files/diges...
    References for papers mentioned in the video can be found at
    samuelalbanie.com/digests/2023...
    For related content:
    - Twitter: / samuelalbanie
    - personal webpage: samuelalbanie.com/
    - YouTube: / @samuelalbanie1

COMMENTS • 31

  • @rldp
    @rldp 4 months ago +23

    This is one of the best explanations of not just ViT, but transformers in general that I have watched. Excellent video

  • @whale27
    @whale27 5 months ago +14

    Unbelievable quality. Happy to be here before this channel blows up.

  • @capsbr2100
    @capsbr2100 2 months ago +6

    Goodness, what a remarkable video. This is by far the best explanation video I have watched about vision transformers.

  • @siddhantshah1271
    @siddhantshah1271 3 months ago +5

    This is one of the cleanest explanations of ViTs I have come across. Amazing work Samuel! Inspiring.

  • @continuallearning8366
    @continuallearning8366 5 months ago +5

    Excellent video! Honored to be here before it goes viral 🙏🏾

  • @jesusalpaca7170
    @jesusalpaca7170 1 month ago +1

    For a beginner like me, I would say this is the intro video we were waiting for :')

  • @user-iy6gq8yd3p
    @user-iy6gq8yd3p 5 months ago +5

    Thank you for making this wonderful video. So clear! Please continue your awesome video work!

  • @gnorts_mr_alien
    @gnorts_mr_alien 21 days ago

    man, what a video. thank you!

  • @abhimanyuyadav2685
    @abhimanyuyadav2685 5 months ago +2

    Your weekly AI news was really useful.
    Please bring it back.

  • @PotatoKaboom
    @PotatoKaboom 5 months ago +4

    I've held guest lectures on the inner workings of transformers myself, but I still learned a bunch from this! Everything after 22:15 was very exciting to watch, very well presented and easy to understand! Very well done, I subscribed for more :)

  • @EigenA
    @EigenA 1 month ago

    Great work!

  • @zainbaloch5541
    @zainbaloch5541 29 days ago

    Thank you so much!

  • @tomrichter9021
    @tomrichter9021 2 months ago

    Great video

  • @minute_machine_learning5362
    @minute_machine_learning5362 8 days ago

    great explanation

  • @thecheekychinaman6713
    @thecheekychinaman6713 2 months ago

    I was studying up on Transformers and ViTs half a year ago, and recently checked back to find this (to my surprise). Great clear explanations, can tell CAML is in great hands!

  • @sbdzdz
    @sbdzdz 5 months ago +2

    Very well presented!

  • @rmmajor
    @rmmajor 1 month ago

    That is a masterpiece of a video! Many thanks for your work!

  • @soylentpink7845
    @soylentpink7845 5 months ago +2

    Very good video - contents & its presentation!

  • @mattsong6875
    @mattsong6875 5 months ago +2

    Thanks for such an informative and educational video

  • @geomanisgod
    @geomanisgod 2 months ago

    A+++ quality from other planets.

  • @vil9386
    @vil9386 3 months ago

    Wow, this video helped me a lot in understanding attention and ViT. Packed with all the logic needed to design a solution using the latest techniques as of today.

  • @amoghjain
    @amoghjain 5 months ago +2

    Thank you so very much for sharing your insights and intuition behind soooo many concepts.

  • @flamboyanta4993
    @flamboyanta4993 5 months ago +2

    Excellent and clearly communicated. Thanks.
    A question about 20:05, when discussing positional embeddings: the legend of the waves says dim 4, ..., dim 7. Here, does dim refer to the length D of the patch embedding? As in, we'll get as many sine waves as there are D dims?
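
A minimal sketch of the likely answer, assuming the fixed sinusoidal scheme from "Attention is All You Need": each of the D channels of the position embedding traces one sine or cosine wave over positions, with the frequency falling geometrically as the channel index grows. So "dim 4 ... dim 7" index individual channels of the D-dimensional embedding (D is the embedding width, not the patch side length), and there are exactly D such waves.

```python
import numpy as np

def sinusoidal_position_embeddings(num_positions: int, dim: int) -> np.ndarray:
    """Fixed sinusoidal position embeddings: one wave per channel.

    Row p is the embedding of position p; column d is one "dim" --
    the curves labelled dim 4 ... dim 7 on the slide are (assumed
    to be) individual columns of this matrix.
    """
    positions = np.arange(num_positions)[:, None]   # (P, 1)
    channels = np.arange(dim)[None, :]              # (1, D)
    # Wavelength grows geometrically with the channel index.
    angle_rates = 1.0 / np.power(10000.0, (2 * (channels // 2)) / dim)
    angles = positions * angle_rates                # (P, D)
    emb = np.zeros((num_positions, dim))
    emb[:, 0::2] = np.sin(angles[:, 0::2])          # even channels: sine
    emb[:, 1::2] = np.cos(angles[:, 1::2])          # odd channels: cosine
    return emb

# D = 64 gives 64 waves; "dim 4" ... "dim 7" are four of its columns.
pe = sinusoidal_position_embeddings(num_positions=196, dim=64)
for d in range(4, 8):
    print(f"dim {d}: first values {pe[:4, d].round(3)}")
```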

  • @flamboyanta4993
    @flamboyanta4993 5 months ago +1

    Another question:
    At 30:00, discussing how early attention layers tend to focus on local features and deeper ones on more global features of the input, I didn't understand the significance of the x-axis (sorted attention heads). Is this just a count of how many attention heads there are in the respective block? And does it suggest that in the large-data regime, even early attention blocks with 14+ heads will also tend to observe features globally? Is this correct?
    And thank you in advance!
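
The plot in question appears to be the "mean attention distance" analysis from the ViT paper (Dosovitskiy et al., 2020): for each head, the image-space distance between a query patch and the patches it attends to is averaged, weighted by the attention, and the x-axis lists the heads of each layer sorted by that value, so it spans the number of heads per block. A hedged sketch of the metric (shapes and names are illustrative):

```python
import numpy as np

def mean_attention_distance(attn: np.ndarray, grid: int, patch_px: int) -> np.ndarray:
    """Mean attention distance per head, as in the ViT paper's figure.

    attn: (heads, N, N) attention weights over N = grid * grid patch
          tokens (the CLS token is omitted here for simplicity -- an
          assumption, not necessarily what the lecture's plot does).
    Returns one scalar per head: the average pixel distance between a
    query patch and the patches it attends to, weighted by attention.
    """
    rows, cols = np.divmod(np.arange(grid * grid), grid)
    coords = np.stack([rows, cols], axis=1) * patch_px      # (N, 2) pixel coords
    # Pairwise Euclidean distances between patch positions: (N, N).
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Expected distance per query token, then averaged over queries.
    return (attn * dists[None]).sum(axis=-1).mean(axis=-1)

# Toy example: 3 random softmax heads over a 14x14 grid of 16 px patches.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 196, 196))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(np.sort(mean_attention_distance(attn, grid=14, patch_px=16)))
```

Sorting the per-head values within each layer produces the x-axis ordering; the takeaway is that with enough data, even early blocks contain some heads whose mean distance is large, i.e. heads that already attend globally.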

  • @miraclemaxicl
    @miraclemaxicl 1 month ago

    More Compute Is All You Need

  • @iez
    @iez 2 months ago

    Any ViTs that are open source?

  • @user-fv5oj4qk1l
    @user-fv5oj4qk1l 4 months ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 *The Evolution of AI and Computer Vision*
    - General methods leveraging computation prove most effective in AI development.
    - Evolution from handcrafted features to Convolutional Neural Networks (CNNs) and then to Transformers, showcasing a reduction in inductive biases and an increase in data-driven approaches.
    01:09 🤖 *Neural Network Architectures*
    - Importance of network architecture in building intelligent machines.
    - Distinction between network architecture and network parameters, focusing on resource limitations and efficient design.
    02:32 💡 *Introduction to Transformers*
    - Transformers' dominance in AI, initially in Natural Language Processing (NLP) and then in Computer Vision.
    - Discussion on why Transformers took time to transition from NLP to Computer Vision.
    03:57 🌐 *Understanding Transformers: Encoder and Decoder*
    - Explanation of the Transformer architecture with its encoder and decoder components.
    - Different variants of Transformers: Encoder-only, Decoder-only, and Encoder-Decoder architectures.
    05:33 🔍 *Applying Transformers to Computer Vision*
    - Vision Transformers (ViT) process images by slicing them into patches, using position embeddings and Transformer encoders.
    - The methodology of transforming images into a sequence of embeddings for the Transformer encoder.
    07:08 🔗 *Multi-Head Attention in Transformers*
    - Detailed explanation of the multi-head attention mechanism in Transformers.
    - Role of queries, keys, and values in facilitating communication between different embeddings.
    09:12 🧩 *Transformer Encoder Blocks and Scaling*
    - The structure and function of Transformer encoder blocks, including multi-head attention and MLP (a minimal code sketch of such a block follows this comment).
    - Importance of residual connections and layer normalization in optimizing Transformer models.
    11:05 🚀 *Scaling and Hardware Influence in AI*
    - The impact of scaling and hardware advancements on Transformer model performance.
    - Discussion on the exponential increase in computational resources for training large models.
    13:50 🛠 *MLP and Optimization in Transformers*
    - Role of the multi-layer perceptron (MLP) in Transformer architecture for independent processing of embeddings.
    - Importance of non-linearities like ReLU and GELU in Transformer models.
    15:00 ⚙️ *Residual Connections and Layer Normalization*
    - Implementation and significance of residual connections and layer normalization in Transformers.
    - These components facilitate gradient flow and stable learning in deep network training.
    17:05 🌐 *Positional Embeddings in Transformers*
    - Explanation of positional embeddings in Transformers, necessary for maintaining spatial information in sequences.
    - Different methods of implementing positional embeddings in Transformer models.
    19:27 🔄 *Cross Attention and Causal Attention in Transformers*
    - Discussion of
    Made with HARPA AI
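
To make the attention, MLP, residual, and LayerNorm items above concrete, here is a minimal pre-norm ViT encoder block in PyTorch. It mirrors the structure the summary describes, not the lecturer's exact code; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One pre-norm Transformer encoder block: multi-head attention and
    an MLP, each wrapped in LayerNorm and a residual connection."""

    def __init__(self, dim: int = 768, heads: int = 12, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(            # applied to each token independently
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),                       # the non-linearity mentioned above
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention lets the embeddings communicate; the residual
        # connection keeps a direct path for gradients.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # The MLP processes each embedding independently; residual again.
        return x + self.mlp(self.norm2(x))

# Toy forward pass: 2 images, 14 x 14 = 196 patch tokens of width 768.
tokens = torch.randn(2, 196, 768)
print(EncoderBlock()(tokens).shape)   # torch.Size([2, 196, 768])
```

The patch-slicing step itself is commonly implemented as a strided convolution, e.g. nn.Conv2d(3, dim, kernel_size=16, stride=16) followed by a flatten, which turns a 224x224 image into the 196 tokens used above.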

  • @capsbr2100
    @capsbr2100 2 months ago

    So for someone approaching this now, working on resource-constrained devices for both training and inference, does it make more sense to just stick with CNNs?