Lecture 7: Convolutional Networks

  • Published May 31, 2024
  • Lecture 7 moves from fully-connected to convolutional networks by introducing new computational primitives that respect the spatial structure of 2D image data. We discuss convolution layers, which slide a learnable filter over the input data. We discuss pooling layers, which spatially downsample their input data. We then look at normalization layers including batch, layer, and instance normalization, which normalize their input data along different axes and improve training speed.
    Slides: myumi.ch/K43Zy
    _________________________________________________________________________________________________
    Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of neural-network based deep learning methods for computer vision. During this course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks.
    Course Website: myumi.ch/Bo9Ng
    Instructor: Justin Johnson myumi.ch/QA8Pg
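
    As a concrete illustration of the pooling layers mentioned in the description above, here is a minimal numpy sketch of 2x2 max pooling with stride 2 (illustrative only, not code from the lecture):

      import numpy as np

      x = np.arange(16, dtype=float).reshape(4, 4)   # a single 4x4 activation map

      # 2x2 max pooling with stride 2: each output value is the max of one
      # 2x2 window, spatially downsampling 4x4 -> 2x2
      out = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
      print(out)   # [[ 5.  7.]
                   #  [13. 15.]]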

COMMENTS • 28

  • @jh97jjjj 10 months ago +6

    Great lecture for free. Thank you, University of Michigan and Professor Justin.

  • @temurochilov 2 years ago +2

    Thank you. I found answers to questions I had been seeking for a long time.

  • @faranakkarimpour3794 1 year ago +3

    Thank you for the great course.

  • @jijie133 3 years ago +1

    Great.

  • @tatianabellagio3107 3 years ago +9

    Amazing!
    PS: Although I feel sorry for the guy with the coughing attack...

    • @kobic8 1 year ago +1

      Yeah, it kind of disturbed my concentration. It was 2019, right before COVID struck the world, haha 😷

  • @alokoraon1475 3 months ago

    I have this great package for my university course.❤

  • @eurekad2070 2 years ago +1

    Thank you for the excellent video! But I have a question here: at 1:05:42, after layer normalization, every sample in x has shape 1xD, while μ has shape Nx1. How do you perform the subtraction x - μ?

    • @yicheng1991 2 years ago +1

      I wonder if gamma and beta being 1 x D is a typo and should actually be N x 1? If it is not a typo, the subtraction just uses the broadcasting mechanism, like in numpy (see the sketch after this thread).

    • @eurekad2070 2 years ago +1

      @@yicheng1991 Broadcasting mechanism makes sense. Thank you.
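
      A minimal numpy sketch of the shapes discussed above, assuming x is N x D as in the slide (variable names are illustrative, not from the lecture code):

        import numpy as np

        N, D = 4, 8
        x = np.random.randn(N, D)               # N samples, D features each

        # Layer norm: statistics are computed per sample, over the D features
        mu = x.mean(axis=1, keepdims=True)      # shape (N, 1)
        sigma = x.std(axis=1, keepdims=True)    # shape (N, 1)

        # Learnable per-feature scale and shift
        gamma = np.ones((1, D))
        beta = np.zeros((1, D))

        # (N, D) - (N, 1): mu broadcasts across the D features of each sample;
        # gamma/beta of shape (1, D) broadcast across the N samples
        x_hat = (x - mu) / (sigma + 1e-5)
        out = gamma * x_hat + beta              # shape (N, D)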

  • @puranjitsingh1782 2 years ago

    Thanks for an excellent video, Justin!! I had a quick question: how do the conv filters change the 3D input into a 2D output?

    • @sharath_9246 2 years ago +1

      When you take the dot product of a 3D image (e.g. 3*32*32) with a filter (3*5*5), you get a 2D feature map (28*28), because each dot product between the filter and an image patch collapses all three channels into a single number (see the sketch below).
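
      To make the shape bookkeeping concrete, a minimal numpy sketch of one filter sliding over a 3-channel input (a naive loop for illustration, not the lecture's implementation):

        import numpy as np

        x = np.random.randn(3, 32, 32)   # input: 3 channels, 32x32
        w = np.random.randn(3, 5, 5)     # one filter spanning all 3 channels

        H_out = 32 - 5 + 1               # 28 (stride 1, no padding)
        out = np.zeros((H_out, H_out))   # one filter -> one 2D activation map

        for i in range(H_out):
            for j in range(H_out):
                # dot product over all 3 channels and the 5x5 window -> one scalar
                out[i, j] = np.sum(x[:, i:i+5, j:j+5] * w)

        print(out.shape)                 # (28, 28)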

  • @rajivb9493 3 years ago +1

    At 35:09, the expression for the output size of a strided convolution is (W - K + 2P)/S + 1. For W=7, K=3, P = (K-1)/2 = 1 and S=2, we get (7 - 3 + 2*1)/2 + 1 = 3 + 1 = 4. However, the slide shows the output as 3x3 instead of 4x4 in the right-hand corner... is that correct?

    • @bibiworm 2 years ago

      I have the same question.

    • @krishnatibrewal5546 2 years ago +1

      They are different situations: the slide's calculation is done without padding, whereas the formula is written considering padding (see the check below).

    • @rajivb9493 2 years ago

      @@krishnatibrewal5546 ... thanks a lot, yes you're right..

    • @bibiworm 2 years ago

      @@krishnatibrewal5546 thanks.
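
      A quick check of both cases using the output-size formula (a small illustrative helper, not from the lecture code):

        def conv_out_size(W, K, S=1, P=0):
            # Output size of a convolution: floor((W - K + 2P) / S) + 1
            return (W - K + 2 * P) // S + 1

        print(conv_out_size(7, 3, S=2, P=0))   # 3 -> matches the slide (no padding)
        print(conv_out_size(7, 3, S=2, P=1))   # 4 -> the padded calculation above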

  • @rajivb9493 3 years ago

    For batch normalization at test time (59:52), what are the averaging equations used to average the mean and standard deviation sigma? During the lecture some mention is made of an exponential mean of the mean vectors and sigma vectors... please suggest.
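
    A sketch of the commonly used exponential-moving-average update for the test-time statistics (the momentum value and names are illustrative assumptions, not necessarily the exact equations from the lecture):

      import numpy as np

      def bn_train_step(x, running_mean, running_var, momentum=0.1, eps=1e-5):
          # x: (N, D) minibatch; statistics are computed over the batch dimension
          batch_mean = x.mean(axis=0)
          batch_var = x.var(axis=0)
          # Exponential moving averages, kept for use at test time
          running_mean = (1 - momentum) * running_mean + momentum * batch_mean
          running_var = (1 - momentum) * running_var + momentum * batch_var
          x_hat = (x - batch_mean) / np.sqrt(batch_var + eps)
          return x_hat, running_mean, running_var

      def bn_test_step(x, running_mean, running_var, eps=1e-5):
          # At test time, the fixed running statistics replace the batch statistics
          return (x - running_mean) / np.sqrt(running_var + eps)

      D = 16
      running_mean, running_var = np.zeros(D), np.ones(D)
      x = np.random.randn(32, D)
      _, running_mean, running_var = bn_train_step(x, running_mean, running_var)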

  • @hasan0770816268 3 years ago +1

    33:10 stride
    53:00 batch normalization

  • @intoeleven 2 years ago

    Why don't they use batch norm and layer norm together?

  • @bibiworm 2 years ago

    1:01:30 what did he mean by “fusing BN with FC layer or Conv layer”?

    • @krishnatibrewal5546 2 years ago +1

      You can have conv-pool-batchnorm-relu or fc-bn-relu; batch norm can be inserted between any layers of the network.

    • @bibiworm 2 years ago

      @@krishnatibrewal5546 thanks a lot!

    • @yahavx 11 months ago

      Because both are linear operators, you can simply compose them after training (think of them as matrices A and B; at test time you multiply C = A*B and use that in place of both). See the sketch below.
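
      A minimal numpy sketch of that fusion for an FC layer followed by test-time BN (names and shapes are illustrative assumptions):

        import numpy as np

        def fuse_fc_bn(W, b, gamma, beta, mean, var, eps=1e-5):
            # FC layer: y = W @ x + b, with W of shape (D_out, D_in)
            # BN at test time: z = gamma * (y - mean) / sqrt(var + eps) + beta
            # Both are affine, so they collapse into a single affine layer.
            scale = gamma / np.sqrt(var + eps)        # per-output rescaling
            return scale[:, None] * W, scale * (b - mean) + beta

        # Quick check that the fused layer matches FC followed by BN
        D_in, D_out = 8, 4
        W, b = np.random.randn(D_out, D_in), np.random.randn(D_out)
        gamma, beta = np.random.randn(D_out), np.random.randn(D_out)
        mean, var = np.random.randn(D_out), np.random.rand(D_out) + 0.5
        x = np.random.randn(D_in)

        z = gamma * (W @ x + b - mean) / np.sqrt(var + 1e-5) + beta
        W_f, b_f = fuse_fc_bn(W, b, gamma, beta, mean, var)
        print(np.allclose(z, W_f @ x + b_f))          # True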

  • @ibrexg 6 months ago

    Well done! Here is more explanation of normalization: ua-cam.com/video/sxEqtjLC0aM/v-deo.html&ab_channel=NormalizedNerd

  • @magic4266 1 year ago

    Sounds like someone was building Duplo the entire lecture.

    • @brendawilliams8062 9 months ago

      Thomas the tank engine?

    • @park5605 21 days ago

      ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem .
      ahem ahem.
      ahe ahe he he HUUUJUMMMMMMMMMMMM