Computer Vision with Hüseyin Özdemir
Self-Attention
This video describes the details of Scaled Dot-Product Attention, the specific form of Self-Attention used inside the transformer architecture.
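As a companion to the description above, here is a minimal NumPy sketch of Scaled Dot-Product Attention; the token count, embedding size and random projection matrices are illustrative assumptions, not values from the video.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of value vectors

# Hypothetical example: 4 tokens, embedding size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # (4, 8)
```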
In this video, all animations and images, except those taken from the reference papers, belong to me.
References
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
arxiv.org/abs/1706.03762
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn,
Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer,
Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
arxiv.org/abs/2010.11929
#machinelearning #computervision
#deeplearning #ai #aitutorial #education
#transformer #visiontransformer #vit
#selfattention #multiheadattention
#imageprocessing #datascience
#computervisionwithhuseyinozdemir
245 views

Videos

Multi-Head Attention
94 views · 1 month ago
First, Self-Attention, the building block of Multi-Head Attention, is defined. Then, Multi-Head Attention is described in detail. Video Contents: 00:00 Self-Attention 07:55 Multi-Head Attention In this video, animations and images except the ones taken from reference papers belong to me References Attention Is All You Need Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aida...
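A rough NumPy sketch of Multi-Head Attention under assumed shapes (4 tokens, d_model = 16, 4 heads), with randomly initialized projections standing in for learned weights:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, heads, rng):
    """Run Self-Attention once per head on smaller projections,
    then concatenate the head outputs and project back."""
    n, d_model = X.shape
    d_head = d_model // heads
    head_outputs = []
    for _ in range(heads):
        # Per-head Q/K/V projections (learned in a real model)
        W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        A = softmax(Q @ K.T / np.sqrt(d_head))   # (n, n) attention weights
        head_outputs.append(A @ V)               # (n, d_head)
    W_o = rng.normal(size=(d_model, d_model))    # output projection
    return np.concatenate(head_outputs, axis=-1) @ W_o

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 16))                     # 4 tokens, d_model = 16
print(multi_head_attention(X, heads=4, rng=rng).shape)  # (4, 16)
```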
Comparison of CNN and ViT
106 views · 1 month ago
Inductive bias is defined, and the CNN and ViT architectures are compared. All animations and images in this video belong to me References Attention Is All You Need Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin arxiv.org/abs/1706.03762 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale Alexey Dosovitskiy, ...
Vision Transformer
159 views · 1 month ago
After its success in NLP, the transformer architecture is adapted for image recognition as the Vision Transformer (ViT). Video Contents: 00:00 Introduction 02:20 Extracting Embedding Vectors 05:13 Self-Attention 12:58 Multi-Head Attention 15:46 MLP 16:28 Classification Head 17:36 Comparison of CNN and ViT In this video, animations and images except the ones taken from reference papers belong to me Refer...
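For the "Extracting Embedding Vectors" step, here is a minimal sketch of ViT-style patch embedding, assuming a toy 32×32 RGB image and a random projection in place of the learned one:

```python
import numpy as np

def extract_patch_embeddings(image, patch, d_model, rng):
    """Split the image into non-overlapping patches, flatten each patch,
    and linearly map it to a d_model-dimensional embedding vector."""
    H, W, C = image.shape
    patches = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append(image[i:i+patch, j:j+patch].reshape(-1))
    patches = np.stack(patches)              # (num_patches, patch * patch * C)
    E = rng.normal(size=(patches.shape[1], d_model))  # learned in practice
    return patches @ E                       # (num_patches, d_model)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                # toy 32x32 RGB image
emb = extract_patch_embeddings(img, patch=16, d_model=64, rng=rng)
print(emb.shape)                             # (4, 64): (32/16)^2 = 4 patches
```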
Diffusion Models Explained with Math From Scratch
956 views · 3 months ago
The Diffusion Model is a popular generative AI method. Stable Diffusion and OpenAI Sora are diffusion models where diffusion takes place in latent space instead of image pixel space. Video Contents: 00:38 Sampling from a Standard Gaussian Distribution 02:30 Forward Process 04:58 Noise Addition in Single Step 06:11 Variance Schedule 07:36 Reverse Process 08:32 Derivation of Variational Lower Bound 1...
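As a small illustration of the forward process, the closed form x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε can be sketched as below; the linear variance schedule and toy image are assumptions for the example:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t directly from x_0 with the closed-form forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]     # product of alphas up to step t
    eps = rng.standard_normal(x0.shape)   # sample from a standard Gaussian
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # assumed linear variance schedule
x0 = rng.random((8, 8))                   # toy "image"
xt = forward_diffusion(x0, t=500, betas=betas, rng=rng)
print(xt.shape)                           # (8, 8): a noisier version of x0
```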
Greetings
532 views · 11 months ago
Hi, my name is Hüseyin Özdemir. Welcome to my channel! This channel is about Computer Vision, Deep Learning, Machine Learning and Artificial Intelligence. In each video, ideas are described step by step in full detail. If you find the videos useful, please like, subscribe, share and comment.
Supervised Learning
72 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Definition of Labeled Dataset 00:53 Supervised Learning Mechanism 03:01 Subcategories of Supervised Learning 03:44 Supervised vs. Unsupervised * Definition of Labeled Dataset * Illustration of Supervised Learning Mechanism * Subcategories of Supervised Learning * Comparison of Supervised and Unsupe...
Unsupervised Learning
61 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Illustration of Labeled and Unlabeled Datasets 01:27 Unsupervised Learning 03:05 Applications of Unsupervised Learning 04:12 Supervised vs. Unsupervised * Illustration of Labeled and Unlabeled Datasets * Unsupervised Learning * Applications of Unsupervised Learning * Comparison of Supervised and Un...
Semi-Supervised Learning
73 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Comparison of Supervised and Unsupervised Learning 01:08 Semi-Supervised Learning 01:45 Self-Training, a Semi-Supervised Learning example * Comparison of Supervised and Unsupervised Learning * Semi-Supervised Learning * Self-Training, a Semi-Supervised Learning example All images and animations in ...
Self-Training
192 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 In this video, Self-Training, a Semi-Supervised Learning method, is described in detail All images and animations in this video belong to me #machinelearning #computervision #deeplearning #ai #aitutorial #education #semisupervisedlearning #unlabeleddata #pseudolabel #selftraining #labeleddata #imageprocessing #datascienc...
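The pseudo-labeling loop at the heart of Self-Training can be sketched as below; `model_fit` and `model_predict_proba` are hypothetical callables standing in for any classifier's training and prediction steps, and the confidence threshold is an assumed value:

```python
import numpy as np

def self_training(model_fit, model_predict_proba, X_lab, y_lab, X_unlab,
                  threshold=0.95, rounds=5):
    """Iteratively pseudo-label confident unlabeled samples and retrain."""
    X, y = X_lab.copy(), y_lab.copy()
    model = None
    for _ in range(rounds):
        model = model_fit(X, y)                      # train on current labels
        proba = model_predict_proba(model, X_unlab)  # (n_unlabeled, n_classes)
        confident = proba.max(axis=1) >= threshold   # keep confident samples only
        if not confident.any():
            break
        X = np.vstack([X, X_unlab[confident]])       # add pseudo-labeled samples
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]                # shrink the unlabeled pool
    return model
```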
Self-Supervised Learning
116 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Comparison of Supervised and Unsupervised Learning 01:09 Self-Supervised Learning 01:50 Self-Supervised Learning example with Autoencoder 04:16 Pretext and Downstream Tasks 06:32 Different Types of Pretext Tasks * Comparison of Supervised and Unsupervised Learning considering input data * Self-Supe...
Autoencoder
49 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 What is Autoencoder? 00:51 Parts of Autoencoder 02:37 Information about Dataset 03:43 Network & Training 05:59 Dimensionality Reduction 07:00 Self-Supervised Learning * What is Autoencoder? * Parts of Autoencoder: Encoder, Bottleneck and Decoder * Information about Dataset Used To Train Autoencoder...
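A minimal forward-pass sketch of the encoder / bottleneck / decoder structure, with assumed layer sizes and random weights standing in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Assumed sizes: 784 inputs -> 32-dimensional bottleneck -> 784 outputs.
# In practice the weights are learned by minimizing reconstruction error.
W_enc = rng.normal(scale=0.1, size=(784, 32))
W_dec = rng.normal(scale=0.1, size=(32, 784))

def autoencoder(x):
    code = relu(x @ W_enc)       # encoder: compress to the bottleneck
    recon = code @ W_dec         # decoder: reconstruct the input
    return code, recon

x = rng.random((1, 784))         # toy flattened 28x28 input
code, recon = autoencoder(x)
print(code.shape, recon.shape)   # (1, 32) (1, 784)
```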
Logit and Probability
141 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Case for Binary Classification 03:05 Case for Multi-Class Classification 05:41 Case for Multi-Label Classification * Case for Binary Classification * Case for Multi-Class Classification * Case for Multi-Label Classification All images and animations in this video belong to me #machinelearning #comp...
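The logit-to-probability mappings covered in the video can be summarized in a few lines; the logit values below are arbitrary examples:

```python
import numpy as np

def sigmoid(z):
    """Binary / multi-label case: each logit becomes an independent probability."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Multi-class case: logits become probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
print(sigmoid(logits))   # ~[0.88 0.27 0.62], independent per class
print(softmax(logits))   # ~[0.79 0.04 0.18], sums to 1
```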
Binary Classification
113 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Definition of Binary Classification 00:50 Binary Classification example 01:24 Binary Classification is a Supervised Learning Method 02:11 About Training & Inference Phases 03:44 Output Layer for Binary Classification 04:50 Sigmoid Activation * Definition of Binary Classification * Binary Classifica...
Multi-Class Classification
90 views · 1 year ago
Subscribe To My Channel www.youtube.com/@huseyin_ozdemir?sub_confirmation=1 Video Contents: 00:00 Definition of Multi-Class Classification 00:51 Multi-Class Classification example 01:26 Multi-Class Classification is a Supervised Learning Method 02:20 About Training & Inference Phases 03:48 Output Layer for Multi-Class Classification 04:40 Softmax Activation * Definition of Multi-Class Classific...
Multi-Label Classification
540 views · 1 year ago
Multi-Class vs. Multi-Label Classification
476 views · 1 year ago
Loss Function
55 views · 1 year ago
Cost Function
50 views · 1 year ago
Binary Cross-Entropy Loss
328 views · 1 year ago
Categorical Cross-Entropy Loss
178 views · 1 year ago
Linear Transformation
1.1K views · 1 year ago
Affine Transformation
6K views · 1 year ago
Projective Transformation
10K views · 1 year ago
Homogeneous Coordinates
2.7K views · 1 year ago
Rigid Transformation
493 views · 1 year ago
Similarity Transformation
1.2K views · 1 year ago
Forward and Backward Image Warping
4.2K views · 1 year ago
Splatting
814 views · 1 year ago
Image Rotation
542 views · 1 year ago

COMMENTS

  • @marufahmed3416 · 7 days ago

    Very good visual explanation, thanks very much.

  • @talon6277 · 14 days ago

    Very helpful, well explained. Thank you!

  • @ercancetin6002 · 1 month ago

    Nice work

  • @ajkdrag · 1 month ago

    Can you do a video on DETR and the new YOLO models?

  • @doublesami · 1 month ago

    Very good explanation. Could you please make a video on VMamba (Vision Mamba) to understand it in depth, like how the 2D selective scan works? Looking forward to it.

  • @user-rz8qb7gm2t · 1 month ago

    Thanks for your detailed explanation!

  • @zaharvarfolomeev1536 · 2 months ago

    Thank you! I liked your video more than anyone else's on the topic of momentum.

  • @ivannasha5556 · 3 months ago

    Thanks! I was experimenting with IFS fractals 30+ years ago. I did not remember much, and Google was no help. Everyone just lists the known basics, and nobody else explains the math needed to make your own.

  • @arinmahapatro61 · 5 months ago

    Insightful!

  • @dhirajkumarsahu999 · 7 months ago

    Thanks a lot

  • @gneil1985 · 7 months ago

    Great insights into the perspective transformation. Very clear explanation.

  • @sixface20 · 7 months ago

    Great tutorial

  • @user-ro8kx2dc6g · 7 months ago

    Perfect presentation!

  • @thatguy5787 · 8 months ago

    This is fantastic. Very well done.

  • @ercancetin6002 · 8 months ago

    It is sad that such careful work receives so little attention. I wish you success, my brother.

    • @huseyin_ozdemir · 8 months ago

      Thank you for your comment. Looking at it in a broader sense, not just in terms of the work I do for my channel, one of the things life has taught me is that every effort and every deed has a return. Sometimes it comes right away, sometimes it takes time. Sometimes it comes directly, sometimes through indirect ways.

  • @mehmetozkan1075 · 9 months ago

    ABSOLUTELY GOOD JOB. THANK YOU SO MUCH

  • @mehmetozkan1075 · 9 months ago

    It's great that you added this lesson as well. Thanks a lot.

    • @huseyin_ozdemir · 9 months ago

      Thank You. I think YOLOv1, YOLOv2 and YOLOv3 are important for understanding how to address object detection in a single pass by formulating it as a regression problem.

  • @mehmetozkan1075 · 9 months ago

    It is a really simple and understandable series, easy to follow. It would be great if you could include courses on OpenCV, advanced computer vision, and Kaggle project solutions. Thank you for all your hard work.

  • @krimafarjallah7553 · 9 months ago

    💯🤍

  • @denischikita · 10 months ago

    I didn't get it. How did the input depth go from 3 to 32?

    • @huseyin_ozdemir · 10 months ago

      Those are two different examples. In the first one, at 09:22 of the video, an RGB image is convolved with a 3×3 filter. Since the RGB image has 3 channels, the convolution filter must also have 3 channels. This is a typical filtering operation in an image processing application. The second example, at 12:08 of the video, is more generic: a convolution operation at a convolutional layer is illustrated. That's why, in the video, it's written "Let our input image depth be 32".
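(A small NumPy sketch of the two cases described above; all shapes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Case 1 (09:22): a 5x5 RGB image convolved with one 3x3 filter.
# The filter depth must match the input depth (3), and one filter
# produces a single output channel.
image = rng.random((5, 5, 3))
filt = rng.random((3, 3, 3))
out = np.zeros((3, 3))                    # "valid" output size: 5 - 3 + 1
for i in range(3):
    for j in range(3):
        out[i, j] = (image[i:i+3, j:j+3, :] * filt).sum()

# Case 2 (12:08): a convolutional layer whose input feature map has depth 32.
# Each filter must then also have depth 32; the number of filters
# (say 64) determines the output depth.
fmap = rng.random((8, 8, 32))
filters = rng.random((64, 3, 3, 32))
print(out.shape, filters.shape)           # (3, 3) (64, 3, 3, 32)
```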

  • @denischikita · 10 months ago

    Thank you. I respect your original approach to teaching such a complex topic. It helped me put the right things in my mind.

  • @vivekrai1974 · 10 months ago

    Very informative video. I see that you have covered various topics like the mathematics of transformations, supervised learning, etc. across your videos. If you created playlists, it would be easier for viewers.

  • @mfatihaydogdu7 · 11 months ago

    It would be very helpful to create playlists.

  • @muhittinselcukgoksu1327 · 11 months ago

    I congratulate you on your Digital Image Processing videos. When commercial products are everywhere, detailed and explanatory videos are easily accessible data. Thank you so much.

  • @dinezeazy · 1 year ago

    Man, I really love how you are fusing different topics in a single video, and then having a separate video for each particular topic. This is great.

    • @huseyin_ozdemir · 1 year ago

      Glad you like the videos. Thanks for the comment.

  • @milanm4772 · 1 year ago

    Nicely done. Best explained.

  • @dinezeazy · 1 year ago

    This is amazing; please do more of these. Camera calibration too, with an example, and from there what can be achieved using the calibration, like solving the parallax problem, estimating object distance, etc. With your kind of slow and steady explanation, everyone will be able to understand.

  • @mizzonimirko · 1 year ago

    Honestly, I do not fully understand how it works. Given a batch, shouldn't the output of that hidden layer have dimensions batch_size × output_size? Doesn't it follow that the mean / variance should be vectors?

    • @huseyin_ozdemir · 1 year ago

      Hi, batch normalization can be confusing at first glance. Never mind. Let's say we have a fully connected layer with n neurons. If the batch size is m, then each neuron outputs m values for 1 batch of inputs. The mean and variance for that neuron for that batch are computed using those m outputs, as described at 09:01 of the video. So the mean and variance are scalars and are computed for each batch during training. One important thing to note is that while computing the mean and variance for 1 neuron, only the outputs of that neuron are used.
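(A short NumPy illustration of the per-neuron statistics described above, with assumed sizes m = 8 and n = 4.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Outputs of a fully connected layer with n = 4 neurons
# for one batch of m = 8 inputs: shape (m, n)
outputs = rng.normal(size=(8, 4))

# Each neuron's mean and variance are scalars computed from that
# neuron's own m outputs only (axis 0 is the batch dimension).
mean = outputs.mean(axis=0)               # shape (4,): one scalar per neuron
var = outputs.var(axis=0)                 # shape (4,): one scalar per neuron
normalized = (outputs - mean) / np.sqrt(var + 1e-5)
print(normalized.shape)                   # (8, 4)
```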

  • @irshadirshu0722 · 1 year ago

    Nice explanation ❤

  • @villagelifebangladesh9636

    I don't hear any audio... don't know why

    • @huseyin_ozdemir · 1 year ago

      I prepared some videos without voiceover. But, that's not an issue :) Each video is fully self-contained.

  • @srihithbharadwaj3421 · 1 year ago

    Does forward warping need the depth information?

  • @wolfgangbierling · 1 year ago

    Great work! Thank you for this clear explanation!

  • @cathycai9167 · 1 year ago

    Thank you for such a clear video! It really saved me :)

  • @z3515535 · 1 year ago

    This is a good video. I am currently researching the implementation of deconvolution using TensorFlow. Did you use TensorFlow for your implementation? If so, can you share the code?

  • @FelLoss0 · 1 year ago

    Silent video?

    • @huseyin_ozdemir · 1 year ago

      When I first started my channel, I prepared some videos without voiceover. But, I can assure you, those videos, too, include all necessary information and detail as text, diagrams and images to understand the related concepts.

  • @waterspray5743 · 1 year ago

    Thank you for making everything concise and straight to the point.

    • @huseyin_ozdemir · 1 year ago

      Thank You for your comment. Glad you liked the video.

  • @aaryannakhat1004 · 1 year ago

    Thanks a lot! Was facing difficulty in understanding how mini-batch standard deviation helps prevent mode collapse until I saw this video! Really appreciate it! Great work!

  • @dyyno5578 · 1 year ago

    Thank you very much for the clear explanation!

  • @mohammadyahya78 · 1 year ago

    A third question, please: at 5:13, what do you mean by modulation weights?

  • @mohammadyahya78 · 1 year ago

    Thank you again. You mentioned at 4:10 that a dimension is reduced by the reduction ratio r.

    • @huseyin_ozdemir · 1 year ago

      The reduction ratio r is used to create a bottleneck. This way, the network is forced to learn which channels are important. Then unimportant channels are suppressed by scaling them with modulation weights.
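(A minimal sketch of this squeeze-and-excitation idea, with an assumed channel count and reduction ratio; the weights are random stand-ins for learned ones.)

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(fmap, W1, W2):
    """Squeeze: global average pool per channel. Excite: bottleneck MLP
    followed by a sigmoid, giving modulation weights in [0, 1] that
    rescale each channel by its importance."""
    z = fmap.mean(axis=(0, 1))             # squeeze: (C,)
    s = sigmoid(relu(z @ W1) @ W2)         # excite: modulation weights (C,)
    return fmap * s                        # channel-wise rescaling

rng = np.random.default_rng(0)
C, r = 32, 4                               # channels, reduction ratio
W1 = rng.normal(size=(C, C // r))          # bottleneck created by r
W2 = rng.normal(size=(C // r, C))
fmap = rng.random((8, 8, C))
print(squeeze_excite(fmap, W1, W2).shape)  # (8, 8, 32)
```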

  • @mohammadyahya78 · 1 year ago

    Thank you very much. May I know what the modulation weight at 2:11 is, please?

    • @huseyin_ozdemir · 1 year ago

      A modulation weight scales a channel depending on the importance of that channel, so the following layers focus on important information.

  • @muhtasirimran · 1 year ago

    Any link to understand why the 2nd part works?

  • @AJ-et3vf · 1 year ago

    Awesome video. Thank you

  • @balajiharidass4997 · 1 year ago

    Thanks for a great video. It is beautiful to see the clarity of info without audio. Awesome! Love your other videos too :) Keep going...

  • @sivuyilesifuba · 1 year ago

    nice

  • @speedbird7587 · 1 year ago

    Excellent explanation, thanks.

  • @kimbring2727 · 1 year ago

    Thank you for the detailed instruction. I am trying to find out why my model starts to work after adding Layer Normalization to the front part of the network. It is interesting that it also has trainable variables. I just thought it was a static layer like Softmax and ReLU 😹

  • @FarooqComputerVision · 1 year ago

    I would like to appreciate your work. Your explanation of different Computer Vision algorithms is really amazing. Keep it up. Thank you, Sir.

  • @FarooqComputerVision · 1 year ago

    Sorry, no voice is there.

    • @huseyin_ozdemir · 1 year ago

      I prepared some of my videos without voiceover. In fact, with or without voiceover, it does not make any difference. In either case, I try to design the videos to be as explanatory as possible. Everything needed is included as text, images and animations.

  • @yb801 · 1 year ago

    Thanks for the explanation. I am still confused about value and saturation; what's the difference between them?

    • @huseyin_ozdemir · 1 year ago

      Thanks for your comment. Value is related to brightness: if value decreases, the result is darker; if it increases, the result looks brighter. Saturation is related to the purity of the color: if saturation decreases, the resultant color looks as if it is mixed with gray, and the brightness of that gray depends on the value of the color. If value is very low, no matter what the saturation is, the result looks dark. If value is very high, no matter what the saturation is, the result looks bright. You can easily grasp these concepts by inspecting the HSV cylinder.
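(A quick way to see this with Python's standard colorsys module: fix the hue and vary saturation and value independently.)

```python
import colorsys

h = 0.0                                    # fixed hue: red
for s, v in [(1.0, 1.0), (0.2, 1.0), (1.0, 0.2), (0.2, 0.2)]:
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    print(f"S={s} V={v} -> RGB=({r:.2f}, {g:.2f}, {b:.2f})")
# S=1.0 V=1.0: pure bright red; S=0.2 V=1.0: washed-out, grayish red;
# V=0.2: dark result regardless of saturation.
```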