Kapil Sachdeva
Eliminate Grid Sensitivity | Bag of Freebies (Yolov4) | Essentials of Object Detection
This tutorial explains a training technique that helps in dealing with objects whose center lies on the boundaries of the grid cell in the feature map.
This technique falls under the "Bag of Freebies" category as it adds almost zero FLOPS (additional computation) to achieve higher accuracy during test time.
Prerequisite:
Bounding Box Prediction
ua-cam.com/video/-nLJyxhl8bY/v-deo.html?si=Fv7Bfgxd1I-atZF0
Important links:
Paper - arxiv.org/abs/2004.10934
Threads with a lot of discussion on this subject:
github.com/AlexeyAB/darknet/issues/3293
github.com/ultralytics/yolov5/issues/528
Views: 1,160
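The fix itself is tiny: in standard YOLO decoding the predicted center offset is σ(t), which can never actually reach the cell boundary at 0 or 1, so boundary-centered objects require extreme logits. YOLOv4 stretches the sigmoid by a scale factor. A minimal plain-Python sketch (function names are illustrative, not from any codebase):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_center(t, cell, scale=1.0):
    # Plain YOLO: scale=1 keeps the offset in (0, 1), so an offset of
    # exactly 0 or 1 (the cell boundary) needs t -> +/- infinity.
    # YOLOv4 "grid sensitivity" fix: scale > 1 stretches the output to
    # (-(scale-1)/2, 1 + (scale-1)/2), so boundaries become reachable
    # with moderate logits.
    return scale * sigmoid(t) - (scale - 1) / 2.0 + cell
```

With scale=2.0 the offset range becomes (-0.5, 1.5) instead of (0, 1), which is the commonly used setting.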

Videos

GIoU vs DIoU vs CIoU | Losses | Essentials of Object Detection
Views: 4.6K · 1 year ago
This tutorial provides an in-depth and visual explanation of the three bounding box loss functions. Besides the loss functions, you will also learn about computing per-sample gradients using the new PyTorch API. Resources: Colab notebook colab.research.google.com/drive/1GAXn6tbd7rKZ1iuUK1pIom_R9rTH1eVU?usp=sharing Repo with results of training using different loss functions github.com/...
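For reference, the three losses differ only in the penalty subtracted from IoU. A plain-Python sketch of IoU, GIoU, and DIoU for (x1, y1, x2, y2) boxes; CIoU adds one more aspect-ratio consistency term, omitted here for brevity, and the helper names are mine:

```python
def _areas(a, b):
    # intersection and union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter, area_a + area_b - inter

def iou(a, b):
    inter, union = _areas(a, b)
    return inter / union

def giou(a, b):
    # penalty: fraction of the smallest enclosing box C not covered by
    # the union -- stays informative even when the boxes do not overlap
    inter, union = _areas(a, b)
    c_area = (max(a[2], b[2]) - min(a[0], b[0])) * \
             (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c_area - union) / c_area

def diou(a, b):
    # penalty: squared centre distance over squared enclosing diagonal
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    rho2 = ((a[0] + a[2]) - (b[0] + b[2])) ** 2 / 4 + \
           ((a[1] + a[3]) - (b[1] + b[3])) ** 2 / 4
    return iou(a, b) - rho2 / (cw ** 2 + ch ** 2)
```

The corresponding losses are simply 1 - giou(a, b) and 1 - diou(a, b).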
Feature Pyramid Network | Neck | Essentials of Object Detection
Views: 13K · 1 year ago
This tutorial explains the purpose of the neck component in object detection neural networks. In this video, I explain the architecture specified in the Feature Pyramid Network paper. Link to the paper [Feature Pyramid Networks for Object Detection] arxiv.org/abs/1612.03144 The code snippets and full module implementation can be found in this colab notebook: colab.research.google.com/dr...
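The core of the FPN top-down pathway is just: upsample the coarser map 2x and add the lateral connection. A toy sketch on nested lists, assuming the 1x1 lateral convolutions and 3x3 smoothing convolutions have already been applied (function names are illustrative, not from the paper's code):

```python
def upsample2x(m):
    """Nearest-neighbour 2x upsampling of a 2D map (list of lists)."""
    out = []
    for row in m:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                     # repeat each row
    return out

def add_maps(a, b):
    """Element-wise sum of two same-sized 2D maps."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fpn_topdown(laterals):
    """laterals: coarse-to-fine list of maps (each level 2x the previous
    resolution). Returns the merged maps, propagating the semantically
    strong coarse features down to the fine levels."""
    merged = [laterals[0]]
    for lat in laterals[1:]:
        merged.append(add_maps(upsample2x(merged[-1]), lat))
    return merged
```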
Bounding Box Prediction | Yolo | Essentials of Object Detection
Views: 9K · 1 year ago
This tutorial explains finer details about the bounding box coordinate predictions using visual cues.
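The standard YOLO-style decoding the video walks through can be sketched as follows; all names are illustrative, and (cx, cy) is the grid cell's top-left corner while (pw, ph) is the anchor/prior size, all in grid units:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw network outputs (tx, ty, tw, th) into a box."""
    bx = cx + sigmoid(tx)   # sigmoid keeps the centre inside the cell
    by = cy + sigmoid(ty)
    bw = pw * math.exp(tw)  # exp keeps width/height positive,
    bh = ph * math.exp(th)  # expressed as a rescaling of the prior
    return bx, by, bw, bh
```

At all-zero predictions the box sits at the cell centre with exactly the prior's size, which is why good priors matter.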
Anchor Boxes | Essentials of Object Detection
Views: 11K · 1 year ago
This tutorial highlights challenges in object detection training, especially how to associate a predicted box with the ground truth box. It then shows and explains the need for injecting some domain/human knowledge as a starting point for the predicted box.
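One common way to inject that starting knowledge is to match each ground-truth box to the anchor whose width/height shape is most similar, comparing the two as if they shared a centre. A minimal sketch under that assumption (function names are mine):

```python
def shape_iou(wh_a, wh_b):
    """IoU of two boxes aligned at a common centre (width/height only)."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

def best_anchor(gt_wh, anchors):
    """Index of the anchor whose shape best matches the ground truth."""
    return max(range(len(anchors)),
               key=lambda i: shape_iou(gt_wh, anchors[i]))
```

The prediction associated with that anchor is then made responsible for the ground-truth box during training.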
Intersection Over Union (IoU) | Essentials of Object Detection
Views: 4.3K · 1 year ago
This tutorial explains how to compute the similarity between two bounding boxes using the Jaccard Index, commonly known as Intersection over Union in the field of object detection.
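The Jaccard index itself is only a few lines; a minimal sketch for corner-format (x1, y1, x2, y2) boxes:

```python
def jaccard(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    iw = min(a[2], b[2]) - max(a[0], b[0])   # intersection width
    ih = min(a[3], b[3]) - max(a[1], b[1])   # intersection height
    if iw <= 0 or ih <= 0:
        return 0.0                           # boxes do not overlap
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```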
A Better Detection Head | Essentials of Object Detection
Views: 2.3K · 1 year ago
This is a continuation of the Detection Head tutorial. It explains how to write the code so that you can avoid ugly indexing into the tensors and have more maintainable and extensible components. It would be beneficial to first watch the Detection Head tutorial. Link to the Detection Head tutorial: ua-cam.com/video/U6rpkdVm21E/v-deo.html Link to the Google Colab notebook: colab.research.googl...
Detection Head | Essentials of Object Detection
Views: 5K · 1 year ago
This tutorial shows you how to make the detection head(s) that take features from the backbone or the neck. Link to the Google Colab notebook: colab.research.google.com/drive/1KwmWRAsZPBK6G4zQ6JPAbfWEFulVTtRI?usp=sharing
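A typical head emits num_anchors * (5 + num_classes) channels per grid location, and the tensor indexing the videos discuss boils down to slicing that vector per anchor. An illustrative sketch (not the notebook's code):

```python
def split_predictions(v, num_anchors, num_classes):
    """Split one grid location's raw prediction vector of length
    num_anchors * (5 + num_classes) into per-anchor
    (box[4], objectness, class_scores) triples."""
    stride = 5 + num_classes
    assert len(v) == num_anchors * stride
    out = []
    for a in range(num_anchors):
        chunk = v[a * stride:(a + 1) * stride]
        out.append((chunk[:4], chunk[4], chunk[5:]))
    return out
```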
Reshape, Permute, Squeeze, Unsqueeze made simple using einops | The Gems
Views: 5K · 1 year ago
This tutorial introduces you to a fantastic library called einops. Einops provides a consistent API for reshape, permute, squeeze, and unsqueeze operations and enhances the readability of your tensor code. einops.rocks/ Google colab notebook that has the examples shown in the tutorial: colab.research.google.com/drive/1aWZpF11z28KlgJZRz8-yE0kfdLCcY2d3?usp=sharing
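To see what a pattern like 'b c h w -> b h w c' means, here is the same permutation written out by hand on nested lists; in real code this is the one-liner einops.rearrange(x, 'b c h w -> b h w c'):

```python
def to_channels_last(x):
    """Manual equivalent of einops.rearrange(x, 'b c h w -> b h w c')
    on nested lists: every output element is just a re-read of the
    input at permuted indices."""
    B, C = len(x), len(x[0])
    H, W = len(x[0][0]), len(x[0][0][0])
    return [[[[x[b][c][h][w] for c in range(C)]
              for w in range(W)]
             for h in range(H)]
            for b in range(B)]
```

The einops pattern string states the same index mapping declaratively, which is exactly what makes it more readable than chains of permute/reshape calls.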
Image & Bounding Box Augmentation using Albumentations | Essentials of Object Detection
Views: 7K · 1 year ago
This tutorial explains how to do image pre-processing and data augmentation using the Albumentations library. Google Colab notebook: colab.research.google.com/drive/1FoQKHuYuuKNyDLJD35-diXW4435DTbJp?usp=sharing
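Albumentations keeps the boxes in sync with the image transforms for you. To illustrate the kind of bookkeeping it handles, here is the box update for a horizontal flip of a pascal_voc-format box, as a standalone sketch (not the library's implementation):

```python
def hflip_bbox(bbox, img_w):
    """Update a pascal_voc-style (x_min, y_min, x_max, y_max) box for a
    horizontal flip of an image of width img_w: x coordinates mirror
    about the image centre, and min/max swap roles."""
    x_min, y_min, x_max, y_max = bbox
    return (img_w - x_max, y_min, img_w - x_min, y_max)
```

Flipping twice returns the original box, which is a handy sanity check for any geometric box transform.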
Bounding Box Formats | Essentials of Object Detection
Views: 7K · 1 year ago
This tutorial goes over the various bounding box formats used in object detection. Link to the Google Colab notebook: colab.research.google.com/drive/1GQTmjBuixxo_67WbvwNp2PdCEEsheE9s?usp=sharing
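The conversions between the common formats are simple arithmetic; a sketch with illustrative names (xyxy = corner coordinates, cxcywh = centre + size, xywh = COCO-style top-left corner + size):

```python
def xyxy_to_cxcywh(b):
    x1, y1, x2, y2 = b
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def cxcywh_to_xyxy(b):
    cx, cy, w, h = b
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def xyxy_to_xywh(b):
    x1, y1, x2, y2 = b
    return (x1, y1, x2 - x1, y2 - y1)
```

The round trip xyxy -> cxcywh -> xyxy must return the original box, a useful unit test when mixing libraries that expect different formats.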
Object Detection introduction and an overview | Essentials of Object Detection
Views: 9K · 1 year ago
This is an introductory video on object detection, a computer vision task to localize and identify objects in images. Notes: * I have intentionally not talked about 2-stage detectors. * There will be follow-up tutorials dedicated to the individual concepts.
Softmax (with Temperature) | Essentials of ML
Views: 3.7K · 2 years ago
A visual explanation of the why, what, and how of the softmax function. As a bonus, the notion of temperature is explained.
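A minimal plain-Python version, with the usual max-subtraction for numerical stability; dividing the logits by the temperature before exponentiating is the whole trick:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature < 1 sharpens the distribution toward the argmax;
    temperature > 1 flattens it toward uniform."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```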
Grouped Convolution - Visually Explained + PyTorch/numpy code | Essentials of ML
Views: 4.8K · 2 years ago
In this tutorial, the need for and mechanics behind Grouped Convolution are explained with visual cues. The understanding is then validated by looking at the weights generated by the PyTorch Conv layer and by performing the operations manually using NumPy. Google colab notebook: colab.research.google.com/drive/1AUrTK622287NaKHij0YqOCvcdi6gVxhc?usp=sharing Playlist: ua-cam.com/video/6SizUUfY3Qo/v-deo.h...
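The mechanics show up directly in the weight shape: with g groups, each output filter sees only c_in/g input channels, so the parameter count drops by a factor of g. A sketch using the PyTorch (out_channels, in_channels/groups, k, k) weight convention (function names are mine):

```python
def conv_weight_shape(c_in, c_out, k, groups=1):
    """Weight tensor shape of a 2D conv layer, PyTorch convention:
    each of the c_out filters sees only c_in // groups channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_out, c_in // groups, k, k)

def conv_param_count(c_in, c_out, k, groups=1):
    s = conv_weight_shape(c_in, c_out, k, groups)
    return s[0] * s[1] * s[2] * s[3]
```

Setting groups equal to c_in (with c_out a multiple of it) gives depthwise convolution as a special case.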
Convolution, Kernels and Filters - Visually Explained + PyTorch/numpy code | Essentials of ML
Views: 2.1K · 2 years ago
This tutorial explains (with proofs using code) the components & operations in a convolutional layer in neural networks. The difference between a kernel and a filter is clarified as well. The tutorial also points out that not all kernels convolve/correlate with all input channels, which seems to be a common misunderstanding. Hopefully, this visual and code example can help show th...
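The operation a conv layer actually computes per channel is cross-correlation (no kernel flip); a minimal 'valid'-mode sketch on nested lists:

```python
def correlate2d(image, kernel):
    """'Valid' 2D cross-correlation on nested lists: slide the kernel
    over the image and take the elementwise-product sum at each
    position, with no kernel flip (unlike true convolution)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out
```

A full filter stacks one such kernel per input channel (of its group) and sums the results into one output channel.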
Matching patterns using Cross-Correlation | Essentials of ML
Views: 1.2K · 2 years ago
Let's make the Correlation Machine | Essentials of ML
Views: 1.8K · 2 years ago
Reparameterization Trick - WHY & BUILDING BLOCKS EXPLAINED!
Views: 11K · 2 years ago
Variational Autoencoder - VISUALLY EXPLAINED!
Views: 13K · 2 years ago
Probabilistic Programming - FOUNDATIONS & COMPREHENSIVE REVIEW!
Views: 5K · 3 years ago
Metropolis-Hastings - VISUALLY EXPLAINED!
Views: 34K · 3 years ago
Markov Chains - VISUALLY EXPLAINED + History!
Views: 14K · 3 years ago
Monte Carlo Methods - VISUALLY EXPLAINED!
Views: 4.5K · 3 years ago
Conjugate Prior - Use & Limitations CLEARLY EXPLAINED!
Views: 3.4K · 3 years ago
How to Read & Make Graphical Models?
Views: 3.1K · 3 years ago
Posterior Predictive Distribution - Proper Bayesian Treatment!
Views: 6K · 3 years ago
Sum Rule, Product Rule, Joint & Marginal Probability - CLEARLY EXPLAINED with EXAMPLES!
Views: 6K · 3 years ago
Noise-Contrastive Estimation - CLEARLY EXPLAINED!
Views: 11K · 3 years ago
Bayesian Curve Fitting - Your First Baby Steps!
Views: 7K · 3 years ago
Maximum Likelihood Estimation - THINK PROBABILITY FIRST!
Views: 7K · 3 years ago

COMMENTS

  • @SM-mj5np
    @SM-mj5np 4 days ago

    You're awesome.

  • @Blu3B33r
    @Blu3B33r 11 days ago

    5:57 gave me the aha!-moment. Thank you so much!

  • @논리학-w4t
    @논리학-w4t 11 days ago

    Thank you, teacher. After watching this video I finally understood MCMC. Wishing you a long and healthy life.

  • @snehamishra5275
    @snehamishra5275 12 days ago

    So does the bounding box come with the labeled data, or do we create the bounding boxes ourselves?

  • @JagadesanGanesan
    @JagadesanGanesan 14 days ago

    What a clear explanation! A gem.

  • @mechros4460
    @mechros4460 17 days ago

    Exactly what I was looking for, thank you!

  • @somashreechakraborty1129
    @somashreechakraborty1129 20 days ago

    Brilliant explanation! Thank you so much!

  • @alexandrkazda7071
    @alexandrkazda7071 20 days ago

    Thank you, the tutorial helped me a lot to get started with Einops.

  • @danrleidiegues4800
    @danrleidiegues4800 26 days ago

    Excellent explanation. Please, continue doing that.

  • @MlEnthusiast-bz2ky
    @MlEnthusiast-bz2ky 27 days ago

    When we were calculating Pr(x>5), what is the role of h(x) here? Can't we just use p(x)?

  • @AlixChace-x7d
    @AlixChace-x7d 1 month ago

    I have a question: is it possible for the sum of probabilities for a future state to be greater than 1, as in the case of s3 at 14:04 in the video? It seems it should always sum to 1.

  • @ianhowe8881
    @ianhowe8881 1 month ago

    Incredible explanatory skills!

  • @deeplearningexplained
    @deeplearningexplained 1 month ago

    awesome explanation!

  • @gender121
    @gender121 1 month ago

    Can you please clarify whether the weight vector is a column vector or a row vector? It is creating confusion in the multiplication. Thanks in advance for the great series.

  • @vivekpokharel4731
    @vivekpokharel4731 1 month ago

    Thank you so much.

  • @rocksbox156
    @rocksbox156 1 month ago

    Awesome video. Is there any intuition on why we are using reverse KL as opposed to forward KL?

  • @abbudeh
    @abbudeh 1 month ago

    How do we evaluate the target function f, if we assume that it is not known and we want to discover it?

  • @nkapila6
    @nkapila6 1 month ago

    Have been using Chris Bishop's new DL book and he reuses the same figure from PRML. Thanks for your video, the general equations are crystal clear now! ❤

  • @Charmander36023
    @Charmander36023 1 month ago

    Thank you for your work, you are a very talented and valuable teacher

  • @schlast8311
    @schlast8311 1 month ago

    Then why not just use f(x)? Let c·g be a straight line equal to the maximum of f(x).

  • @미비된동작-p4g
    @미비된동작-p4g 1 month ago

    Amazing!!

  • @NurettinOzcelik
    @NurettinOzcelik 1 month ago

    Very good explanation of Kalman filter, thanks for your time and work for that video.

  • @kamikamen_official
    @kamikamen_official 1 month ago

    Damn, einops is nice.

  • @johnjohnston3815
    @johnjohnston3815 1 month ago

    I just found your channel today and I am glad I did! Great stuff

  • @mostafahamidifard6427
    @mostafahamidifard6427 1 month ago

    Compared to other videos, this one's fantastic.

  • @YT-yt-yt-3
    @YT-yt-yt-3 1 month ago

    It's still not clear to me why the Hastings generalization is required. It would have been better if you had explicitly pointed out the problem with Metropolis that requires the generalized solution.

  • @YT-yt-yt-3
    @YT-yt-yt-3 1 month ago

    What are t1, t2, ..., tn? Are these different instances of the target variable? If so, why does each have a different distribution? Are you assuming these are categorical target variables, each with its own distribution? This part is confusing me in all the videos.

  • @YT-yt-yt-3
    @YT-yt-yt-3 1 month ago

    P(w|x): what does x mean here, exactly? The probability of the weights given different data within training, a different training set, or something else?

  • @dmgeo
    @dmgeo 2 months ago

    How is this different from U-net? I think they're pretty similar if you think that in the U-net you're going down in the encoder, up in the decoder and sideways with the skip connections. It's like an upside-down U-net

  • @thomasedison-gm8ix
    @thomasedison-gm8ix 2 months ago

    Professor Kaiming He is the GOD of Deep Residual Networks.

  • @era_ali
    @era_ali 2 months ago

    How can we use this information (predictions_box, predictions_obj, predictions_cls) from the decoder to create visualizations on the input image?

  • @sasori3897
    @sasori3897 2 months ago

    Thanks a lot, this was an amazing explanation! Just a question: why do anchor boxes need to be pre-set values when we can just use the original bbox values, like the center of the original bbox and its W and H values? I don't understand this.

  • @abcdefghijklmnop754
    @abcdefghijklmnop754 2 months ago

    This ground-up approach is excellent 🙂Thank you for explaining...

  • @younique9710
    @younique9710 2 months ago

    Thank you for posting this great video! At 3:53, why did you use the squared Euclidean distance instead of the Euclidean distance? I wonder whether the properties of the squared Euclidean distance still hold with the plain Euclidean distance.

  • @Bbdu75yg
    @Bbdu75yg 2 months ago

    Amazing!

  • @lewisclifton1892
    @lewisclifton1892 2 months ago

    Very intuitive explanation. Thank you.

  • @kevon217
    @kevon217 2 months ago

    Excellent overview. Highly appreciated.

  • @UUalead
    @UUalead 2 months ago

    Amazing, thx a lot.

  • @jeexika3207
    @jeexika3207 2 months ago

    What a wonderful explanation