Lecture 02 - Filtering

  • Published Feb 6, 2025

COMMENTS • 82

  • @deepakkumarshukla · 3 years ago

    Someone asks about sigma and the instructor explains it without any reproach... this is rare humility. The instructor is clearly well experienced and immensely knowledgeable, able to explain the concepts in lucid terms.

  • @chspx · 9 years ago +7

    Awesome lectures! The most didactic and coolest material on this topic I have found so far. Thank you professor Mubarak, and everyone involved.
    PS: In the example shown at 10:42, at row 3, column 6, I think there is a typo on the slide; it needs a "-" sign (-20 instead of 20).

    • @AVI470 · 9 years ago +1

      +Charles Prado Thanks for saying that; now I know I'm not alone... :-)

  • @sociamix · 6 years ago +2

    Best filtering course I've seen :)

  • @coffle1 · 9 years ago

    Probably the best lecture series on computer vision out there!

    • @coffle1 · 9 years ago

      Watch the 2014 series if you're going to go ahead with watching these videos though. It's a lot clearer!

  • @XavierSoriaPoma · 9 years ago +3

    Thanks Dr. Mubarak, your lecture was very helpful.

  • @LarghettoCantabile · 8 years ago +3

    At 13:33, there is an error on the slide (but not in the explanation). The gradient direction is the arctan of f_y over f_x, not f_x over f_y (a small sketch follows this thread).
    Around 21', the definitions of correlation and convolution are fishy: it's not f * h that is defined, but f * h [i, j], and the indices i and j should be used to select cells in f, not in h.

    • @bat102 · 7 years ago +1

      Good point. I noticed these too, but since I'm not confident I thought maybe there was something about it beyond my scope. Thanks for sharing.

    • @tarunuday · 7 years ago +1

      Also, at 10:16 the central difference is the given expression divided by 2.
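
A minimal NumPy sketch of the corrected gradient direction mentioned in this thread: the angle is the arctangent of f_y over f_x. The arrays fx and fy and their values are made up for illustration; in the lecture they would come from applying the x- and y-derivative masks to an image.

```python
import numpy as np

# Hypothetical values for the image derivatives f_x and f_y
# (the names fx, fy and the numbers are illustrative, not from the slides).
fx = np.array([[10.0, 0.0],
               [ 5.0, 3.0]])
fy = np.array([[ 0.0, 10.0],
               [ 5.0,  4.0]])

magnitude = np.sqrt(fx**2 + fy**2)
# Gradient direction: arctan of f_y over f_x (arctan2 also handles f_x == 0).
direction = np.arctan2(fy, fx)

print(magnitude)
print(np.degrees(direction))
```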

  • @AngelusMortis1000 · 8 years ago +1

    This is really good; it puts things in perspective.

  • @danieldetone337 · 12 years ago

    Awesome videos Prof Shah! I believe at 1:02:30 the x-derivative filter should be [-1 1] instead of [1 -1] if you follow the same coordinate system as earlier in the video.

  • @niteshbisht8801 · 7 years ago

    Very nice content,
    better than many overrated courses.

  • @himanshuinca · 6 years ago

    Wow! The way he explained it! Really good lecture!

  • @EngBandar1 · 10 years ago

    Why is the gray level from 0 to 250? Shouldn't it be from 0 to 255 (i.e. 2^8 = 256 levels) at 1:13?

  • @AlDumbrava · 11 years ago

    It seems weird to me that x determines the row and y determines the column (0:03:38). When I think of x, it always spans from left to right, and y (in 2D) spans from bottom to top when I think of math and top to bottom when I think of computer science. I realize that the assignment is completely arbitrary, but this declaration seems to add unnecessary confusion.

  • @rahimpanahi2302 · 12 years ago

    Thanks a lot man :), it helped me through my MV problems.

  • @annaz1652 · 9 years ago +21

    He starts the next lecture, on edge detection, at 53:18.

  • @kushaltm6325 · 6 years ago

    @10:25, while discussing forward/backward differences, it would have been great to see the advantage of doing such an operation. Otherwise the lectures can feel very dry.

  • @benjaminmontz2811 · 9 years ago

    For the correlation and convolution equations (19:31), do the i and j values represent the x and y dimensions of the kernel? So for a 3x3 kernel, i and j would be 3 and 3?

    • @Lainlein · 8 years ago

      +Benjamin Montz I believe they are the indices of the pixel in the resulting image. For the convolution, at least, if you index the kernel from (-1, -1) to (1, 1) (or generally, center the kernel, so for a 5x5 kernel the indices go from -2 to 2), then f*h(i, j) fulfills the equation. For correlation I found a formula that is basically the same, but with f as the kernel and h as the image. Alternatively, one can make an index shift and sum over f(k, l)h(k-i, l-j), with f being the image and h being the kernel. But I'm not 100% sure I understood it either.

  • @c.danielpremkumar8495 · 8 years ago +1

    I have a very fundamental question.
    An image (original) comes with inherent noise. In all these discussions I observe that a model for noise is added (superimposed) on the original image. Why should you voluntarily add noise and then do all these operations (filtering, averaging, etc.) instead of removing the inherent noise in the original image? All these operations only seem to remove the noise that was voluntarily added. To make it simple: why add noise and then try to remove it?

    • @sarathsivaprasad5712 · 8 years ago +1

      We are not adding noise to the image. The noisy image is represented as pure image + noise. It's just a model of the input (assuming all noise in the image is additive over the pure image). In some scenarios it is not that straightforward. I guess that answers your question :)

    • @EngBandar1 · 7 years ago

      You are mixing two different scenarios. In reality, when you take a picture, the snapshot usually has imperfections. You can observe that with cheap cameras, where the image contains imperfections, or noise for short. With modern, expensive cameras you barely notice anything wrong with the image and you will be happy with the high quality (i.e. a lot of filtering under the hood). In the second scenario, researchers first need to generate noise that resembles the true noise (at an approximate level; if we knew it exactly we could simply remove it) and add it to images. They do this in order to test their filters and conclude whether the filters can handle real scenarios. Modeling noise is extremely important in all engineering fields alike.
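
A minimal NumPy sketch of the additive noise model described in this thread: a known amount of noise is added to a clean test image so that a filter can be judged against ground truth. The test image, sigma, and the 3x3 mean filter are illustrative choices, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" image: a bright square on a dark background.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 200.0

# Additive noise model: noisy = clean + n, with n ~ N(0, sigma^2).
sigma = 20.0
noisy = clean + rng.normal(0.0, sigma, clean.shape)

# Simple 3x3 mean filter, used here only to test how much noise is suppressed.
padded = np.pad(noisy, 1, mode='edge')
denoised = np.zeros_like(noisy)
for i in range(noisy.shape[0]):
    for j in range(noisy.shape[1]):
        denoised[i, j] = padded[i:i + 3, j:j + 3].mean()

print("noise std before filtering:", (noisy - clean).std())
print("noise std after filtering: ", (denoised - clean).std())
```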

  • @basicmaths3443 · 4 years ago

    Is the gradient direction fx/fy or fy/fx? I studied in school that the y component over the x component gives the angle or direction. Could you clarify how you took the gradient direction?

  • @abd_alkader · 4 years ago

    What is the difference between an operator and a filter? I'm somewhat confused.

  • @topherMcC · 12 years ago

    This lecture series is excellent, thank you for sharing it!
    I found the jump from the image derivative to correlation confusing; it made more sense, though, once the discussion got around to the topic of image filters.

  • @MillerTimeReal · 7 years ago +2

    Does anyone understand what i and j represent in the convolution/correlation equations? There are summations over k and l but not over i and j. Thanks!

  • @dangsonha66 · 8 years ago +1

    Thank you Dr., it is helpful.

  • @hafizsofian6122 · 8 years ago +2

    I have a problem understanding how the correlation and convolution equations work (especially i and j; where do they come from?). As for k and l, they are just the pixel coordinates, right? I hope someone can explain this to me.

    • @kshiray · 8 years ago +2

      +Hafiz Sofian
      (i, j) is the (row, column) coordinate in image f.
      (k,l) are the coordinates in kernel h.
      Correlation or convolution is applied at every pixel (i,j) in image f by iterating over every element (k,l) in kernel h.
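
A minimal NumPy sketch of the indexing described above: (i, j) ranges over the pixels of image f, while (k, l) ranges over the elements of kernel h (from -1 to 1 for a 3x3 kernel). The function names and the edge padding are illustrative choices, not from the slides.

```python
import numpy as np

def correlate(f, h):
    """Correlation: g[i, j] = sum over (k, l) of h[k, l] * f[i + k, j + l],
    with k, l running from -r to r for a (2r+1) x (2r+1) kernel h."""
    out = np.zeros_like(f, dtype=float)
    r = h.shape[0] // 2                      # kernel "radius", 1 for a 3x3 kernel
    fp = np.pad(f, r, mode='edge')           # pad so border pixels are defined
    for i in range(f.shape[0]):              # (i, j): pixel in the image f
        for j in range(f.shape[1]):
            s = 0.0
            for k in range(-r, r + 1):       # (k, l): element of the kernel h
                for l in range(-r, r + 1):
                    s += h[k + r, l + r] * fp[i + k + r, j + l + r]
            out[i, j] = s
    return out

def convolve(f, h):
    # Convolution is the same operation with the kernel flipped in both directions.
    return correlate(f, np.flip(h))

f = np.arange(25, dtype=float).reshape(5, 5)   # small test image
h = np.ones((3, 3)) / 9.0                       # 3x3 mean kernel
print(correlate(f, h))
```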

  • @MohammadFarhadBulbul · 9 years ago

    What a nice lecture!

  • @LoganDunbar · 10 years ago +1

    Brandon Pelfrey You say the central derivative should be divided by 2, but why is the mask still [-1 0 1]? It is stepping over 2 pixels, so isn't delta x equal to 2?

    • @LoganDunbar · 10 years ago

      And if you take a look here, www.holoborodko.com/pavel/numerical-methods/numerical-derivative/central-differences/, the formula he derives ends up being divided by 2h, so I'm a bit confused how we can just take the difference of the previous and next pixel in our masks without halving the result. Thanks!

    • @thecguyl3056 · 9 years ago

      Logan Dunbar I think he made a mistake, because by logic, and even on Wikipedia, the central difference has a division by 2*delta x, where delta x in our case is one...

    • @lyyau9689 · 8 years ago

      +theCguy L Yes, agree with you. The general form should be (f(x+h1) - f(x-h2)) / (h1 + h2). Or change the expression to
      f(x+1/2) - f(x-1/2) to be correct at 10:19, where f(x+1/2) is the midpoint of f(x+1) and f(x), and f(x-1/2) is the midpoint of f(x) and f(x-1):
      f(x+1/2) = (f(x+1) + f(x))/2 and f(x-1/2) = (f(x) + f(x-1))/2
      => f(x+1/2) - f(x-1/2) = (f(x+1) + f(x))/2 - (f(x) + f(x-1))/2 = (f(x+1) - f(x-1))/2

    • @philippelandry5209 · 8 years ago

      Indeed, if you need accuracy for solving systems of differential equations (e.g. fluid simulations), you should use the true central difference mask [-0.5 0 0.5], but in detection applications, where you are searching for extrema or zero crossings, you can drop the division.
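
A small NumPy sketch of the point made in this thread: the mask [-1 0 1] used in the lecture gives exactly twice the true central difference mask [-0.5 0 0.5], so when you only look for extrema or zero crossings the constant factor can be dropped. The 1-D test signal is made up for illustration.

```python
import numpy as np

# 1-D test signal: a gentle ramp followed by a step (values are illustrative).
f = np.array([0., 1., 2., 3., 10., 11., 12., 13.])

mask_slide = np.array([-1.0, 0.0, 1.0])    # mask as used on the slide
mask_true  = np.array([-0.5, 0.0, 0.5])    # true central difference mask

# np.correlate slides the mask over the signal without flipping it.
d_slide = np.correlate(f, mask_slide, mode='valid')
d_true  = np.correlate(f, mask_true,  mode='valid')

print(d_slide)                            # exactly twice d_true
print(d_true)                             # approximates (f(x+1) - f(x-1)) / 2
print(np.allclose(d_slide, 2 * d_true))   # True: only the scale differs
```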

  • @Justiceisneed · 7 years ago

    Thanks a lot, it is very useful.

  • @ashfaqueshaikh1291 · 9 years ago

    Thank you sir. Great lecture.

  • @wakil2011 · 12 years ago

    Really helpful

  • @quynquyn · 11 years ago

    Hi, shouldn't there be a division by 2 at 10:11 for the central difference?

  • @zhenchaozhang60 · 8 years ago

    Really nice, thank you!

  • @DebadarshanParida · 8 years ago +3

    For the center derivative, I think the value should be (f(x+1) - f(x-1))/2, because delta x = 2 here. Can anyone comment on it?

    • @malharjajoo7393 · 8 years ago

      This is a fair point, which is what I might have done as well,
      but I believe here he is trying to provide an intuition of the derivative process in images.
      An alternative way of thinking is this:
      since you want to detect edges, you want to look at the two sides of a pixel, so you ignore it (by placing a zero at the pixel location, as in the center derivative mask), and hence you would have delta x = 1...

    • @naiden100 · 7 years ago

      I think so too.

  • @abderrahmanebououden5173 · 4 years ago

    Thanks sir.

  • @videofountain · 10 years ago +2

    In the video, ua-cam.com/video/1THuCOKNn6U/v-deo.htmlm14s, the slide indicates ... function f = image ... f = kernel. Is that correct?

    • @ctltpro · 10 years ago +1

      No, it was a mistake. The prof didn't notice it. It should have been h = kernel.

  • @fijaisonjd · 11 years ago

    Really helpful!

  • @alexwchenpro · 12 years ago

    Do we have lecture slides for this that I can print out?

  • @TheHalalPolice · 10 years ago +1

    I don't get the derivative mask part! How does it work?

    • @MrTienDH · 9 years ago

      Ahmad Rashad Same for me, it's confusing.

    • @TheHalalPolice · 9 years ago +2

      tien dh Got it. You use this linear algebra convention to calculate derivatives: if we want to calculate delta I(x) as I(x+1) - I(x), we compute it as I(x+1) + (-I(x)), where I(x) is the intensity value of pixel number x.
      Now assume we have an nx2 matrix and we want its partial derivative with respect to the x axis. If we multiply this matrix by the vector [-1, 1], we get a vector I' with n elements, each element being I(x+1) - I(x).
      The filter is used by repeating the above operation for each column from 2 to m, where m is the number of columns, as if we were sliding this vector across the x axis.
      We can perform the same operation across the y axis, or the z axis if this is a volume image.

    • @coffle1 · 9 years ago

      +Ahmad Rashad Was looking at it for probably 15 min, no joke. The weird thing about it is that the masks aren't used as matrices being multiplied, but rather just as an overlay on the corresponding positions, then summed up. So at 16:00, he says "-1 to 10" three times because he's multiplying each mask position by the corresponding number in the image. He then sums up the column, and divides by 3 to average. I don't know why the operation is defined so confusingly.

    • @ramandutt3646 · 6 years ago

      I don't think he was following the rules of matrix multiplication. He was just multiplying corresponding elements.
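
A small NumPy sketch of the overlay-multiply-sum operation described in this thread (it is not a matrix multiplication): the mask is laid over a patch of the image, corresponding entries are multiplied, and everything is summed. The 3x3 patch values are illustrative, not the numbers shown at 16:00.

```python
import numpy as np

# Hypothetical 3x3 image patch with a dark-to-bright vertical edge.
patch = np.array([[10., 10., 80.],
                  [10., 10., 80.],
                  [10., 10., 80.]])

# x-derivative mask: [-1 0 1] in each row, averaged over the 3 rows.
mask = np.array([[-1., 0., 1.],
                 [-1., 0., 1.],
                 [-1., 0., 1.]]) / 3.0

# Overlay the mask on the patch, multiply corresponding entries, and sum.
response = np.sum(mask * patch)
print(response)   # 70.0: a strong positive x-derivative at this pixel
```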

  • @liamji573 · 9 years ago

    I like this, very helpful.

  • @nikilkaranjit5430 · 6 years ago

    Can anyone help me with the Gaussian filter and kernel convolution?
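
A minimal sketch of one common way to build a Gaussian kernel and convolve it with an image, assuming NumPy and SciPy are available. The sigma, the 3-sigma radius rule, and the random test image are illustrative choices, not from the lecture.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(sigma, radius=None):
    """Build a normalized 2-D Gaussian kernel (a common textbook recipe)."""
    if radius is None:
        radius = int(np.ceil(3 * sigma))     # cover roughly +/- 3 sigma
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()                       # weights sum to 1

rng = np.random.default_rng(0)
image = rng.normal(128.0, 30.0, (64, 64))    # hypothetical noisy test image

kernel = gaussian_kernel(sigma=1.5)
smoothed = convolve2d(image, kernel, mode='same', boundary='symm')
print(kernel.shape, smoothed.shape)
```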

  • @harshpandey6799 · 7 years ago

    www.cs.cornell.edu/courses/cs6670/2011sp/lectures/lec02_filter.pdf for a better understanding of the video.

  • @tnmygrwl · 7 years ago

    Watch on 1.5x, it's a lot better.

  • @jojodi · 11 years ago

    Yes.

  • @beenking3281 · 10 years ago

    Why are there no English subtitles? My English listening is so poor. I really admire those who speak English.

  • @zhangyan4394 · 12 years ago

    Good!

  • @igormihajlovic6820 · 11 years ago

    Who the hell is Pavel Babenko...

  • @eracube0 · 12 years ago +2

    At 19:10, f = Image and h = Kernel.

  • @malharjajoo7393 · 8 years ago

    The explanation is quite good.

  • @jamminjan2349 · 4 years ago

    So... every teacher uses the same ppt.

  • @fijaisonjd · 11 years ago

    Don't like the ads.

  • @ashwinsankaran9494 · 11 years ago +1

    most confused prof ever!! DO NOT WATCH!!

    • @everblut · 11 years ago

      Explain yourself.

    • @amarug · 11 years ago +20

      Seems extremely clear to me.

    • @EngBandar1 · 10 years ago

      @amarug, I agree.

    • @mnzavachris5423 · 7 years ago

      Can any prof from anywhere just do this stuff, as in upload videos to teach?

    • @shashanksahu1971 · 6 years ago

      Knowledge needs patience. I think this tutorial is amazing and to the point.

  • @vernonzhou7454 · 5 years ago

    Very clear and helpful, thanks!