Lecture 11: Mean Shift

  • Published 6 Feb 2025

COMMENTS • 19

  • @alexwchenpro · 12 years ago · +14

    "The best University today is the Internet", Thank you Professor Shah for being part of the Best University. I learned a lot from your lectures.

  • @samrockseagle · 11 years ago

    The good thing about the Professor is that he puts things in a simple way, from a perspective focused on students, not like some presentation to colleagues in the scientific community. Thank you, Sir.

  • @theronisaparticle · 6 years ago · +2

    First of all, thanks a lot for sharing the class. It is genuinely articulate and helpful!
    Several points to be discussed:
    1. PPT page 52. As per the second reference paper, the algorithm updates y0 not after each frame but after each iteration, which is incongruent with the PPT. At least the PPT does not seem to show it clearly; or is it an intended modification?
    2. PPT page 51. Again, as per the reference paper, the weights are to multiply the corresponding kernel value in the profile according to ||Y-Xi|| before multiplying Xi and summing up, etc. Methinks the PPT shows this on page 49 yet seems to omit it accidentally on page 51.
    3. Question. Is writing the Kronecker delta function in the equation for accumulating weighted histograms only a fancy way of saying that exactly one kernel value out of the m values is added to the corresponding bin of the histogram, if and only if the pixel value equals the bin value? If so, the delta function can be omitted entirely in practical coding.
    4. Question. As per the reference paper, a more conservative update of y0 is adopted, that is, y_new = (y0+y_new)/2. Any comment?
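[Editor's note] Points 3 and 4 above can be illustrated in code. The sketch below (a hedged illustration, not the lecture's implementation; function names and the 16-bin quantisation are assumptions) accumulates the kernel-weighted histogram by indexing the pixel's bin directly, which is exactly what dropping the explicit Kronecker delta amounts to, and shows the paper's halfway location update:

```python
import numpy as np

def weighted_histogram(pixels, weights, n_bins=16):
    """Kernel-weighted grey-value histogram.

    The Kronecker delta in the formula just selects the single bin
    b(x_i) that pixel x_i falls into, so in code we index that bin
    directly instead of looping over all m bins.
    """
    hist = np.zeros(n_bins)
    bins = (pixels * n_bins // 256).astype(int)   # b(x_i): pixel value -> bin index
    np.add.at(hist, bins, weights)                # hist[b] += w_i, duplicates accumulate
    return hist / max(hist.sum(), 1e-12)          # normalise so sum_u p_u = 1

# toy example: 1-D grey values with made-up spatial kernel weights
pixels = np.array([10, 200, 15, 250])
weights = np.array([1.0, 0.5, 0.8, 0.2])
p = weighted_histogram(pixels, weights)

# point 4: the paper's conservative location update is just a halfway step
y0, y_new = np.array([5.0, 5.0]), np.array([7.0, 9.0])
y_next = (y0 + y_new) / 2.0
```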

  • @shwetaarian7724 · 8 years ago · +4

    Thank you so much, Sir. It really helped. I hope that one day I have enough knowledge to explain things as clearly as you do.

  • @vevin1986 · 12 years ago

    Thank you, Professor, for uploading lectures on YouTube... Thank you so much! :)

  • @linkmaster959 · 3 years ago

    Slides 50–51 don't make sense to me; the mean shift vector looks like it should include the kernel.
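[Editor's note] One possible resolution of this confusion, sketched below as an assumption rather than a statement about the slides: in the general mean shift update the points are weighted by g = -k', the derivative of the kernel profile; for the Epanechnikov profile k(r) = 1 - r, g is constant, so the kernel term drops out and the step becomes a plain (weighted) mean over points inside the window:

```python
import numpy as np

def mean_shift_step(y, points, weights, h):
    """One mean shift update: y <- sum_i g_i w_i x_i / sum_i g_i w_i.

    g = -k' is the derivative of the kernel profile k.  For the
    Epanechnikov profile k(r) = 1 - r (r = ||(x - y)/h||^2), g is
    constant, which may be why the kernel seems to vanish on the
    slides: the update is a weighted mean over points within h.
    """
    r = np.sum(((points - y) / h) ** 2, axis=1)
    g = (r <= 1.0).astype(float)            # g(r) = 1 inside the window, 0 outside
    gw = g * weights
    return (gw[:, None] * points).sum(axis=0) / gw.sum()

# two clusters of points; starting near one of them converges onto it
pts = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1], [5.0, 5.0]])
w = np.ones(len(pts))
y = np.array([0.5, 0.5])
for _ in range(10):
    y = mean_shift_step(y, pts, w, h=1.0)
```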

  • @sumathik5028 · 4 years ago

    Thank you very much for the excellent explanation, Sir.

  • @brandonrobinson7261 · 10 years ago · +1

    @32:47 — does anyone know why, in the gradient of P(x), (x - xi) changes to (xi - x)?

    • @shangoonewardena6029 · 9 years ago · +3

      Brandon Robinson: It is a mistake in his derivation. The correct substitution is g(x) = -k'(x); hence k'(x - xi) is equivalent to -g(x - xi), or g(xi - x). Please refer to Equation 13 in courses.csail.mit.edu/6.869/handouts/PAMIMeanshift.pdf
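[Editor's note] The sign flip described in this reply can be written out explicitly. With the profile convention k, and defining g(r) := -k'(r), the gradient of the density estimate becomes (a sketch consistent with Equation 13 of the linked paper):

```latex
\nabla P(x)
  = \frac{2c}{n h^{d+2}} \sum_{i=1}^{n}
      (x - x_i)\, k'\!\left( \left\lVert \frac{x - x_i}{h} \right\rVert^{2} \right)
  = \frac{2c}{n h^{d+2}} \sum_{i=1}^{n}
      (x_i - x)\, g\!\left( \left\lVert \frac{x - x_i}{h} \right\rVert^{2} \right),
\qquad g(r) := -k'(r).
```

So the change from (x - xi) to (xi - x) absorbs the minus sign introduced by substituting g = -k'.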

  • @theronisaparticle · 6 years ago

    BTW, the weights involve a square root of a division by p_u(Y0), which comes from the histogram, yet there is no guarantee that all p_u(Y0) values are non-zero at every bin. So in the case of zero, should a small number be added to the divisor? Also, should the RGB means of each ROI, including the model's, be normalised prior to accumulating the histogram, so that the effect of luminance is alleviated? Thanks
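[Editor's note] The zero-bin question above is a real practical concern. A minimal sketch of one common workaround (the epsilon value and function name are assumptions, not from the lecture): add a tiny constant to the candidate histogram before dividing, or equivalently zero the weight at empty bins.

```python
import numpy as np

def bhattacharyya_weights(q, p, bins, eps=1e-10):
    """Per-pixel weights w_i = sqrt(q_u / p_u(y0)) for bin u = b(x_i).

    Adding a tiny eps to the candidate histogram p guards against the
    zero-bin division asked about in the comment; an alternative is
    to set w_i = 0 wherever p_u is zero, since empty bins carry no
    evidence for the target either way.
    """
    ratio = np.sqrt(q / (p + eps))
    return ratio[bins]

q = np.array([0.5, 0.5, 0.0, 0.0])    # model histogram
p = np.array([0.25, 0.75, 0.0, 0.0])  # candidate histogram with empty bins
bins = np.array([0, 1, 1, 0])         # bin index b(x_i) of each pixel
w = bhattacharyya_weights(q, p, bins)
```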

  • @michaelscheinfeild5972 · 9 years ago · +1

    How do you determine the initial tracking window? Manually? Or do you do some segmentation or shape detection first? Since we need to compute the histogram of our ROI for the tracking.

    • @jamilusulaiman5237 · 6 years ago · +1

      You can use background subtraction or manual initialization of the window;
      e.g., in MATLAB you can use the function imcrop() to manually initialize the window.

  • @nehadhasan2540 · 4 years ago

    Thanks, Sir. Can this algorithm be used with an RFID system?

  • @maarijraheem · 11 years ago

    Thank you! It really helped!

  • @mufiskool · 11 years ago

    Thank you, Professor.

  • @khadijalaaroussi3116 · 9 years ago

    Thanks very much, Doctor.

  • @loopc2454 · 6 years ago

    I failed to understand the gradient procedure: why does k become k' when taking the gradient, and what is k?

    • @jamilusulaiman5237 · 6 years ago

      k' represents the derivative of k with respect to x.
      It is important to note that in P(x) only k was expressed as a function of x.
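[Editor's note] The reply above can be written out. Here k is the kernel profile, and x enters P(x) only through the profile's argument, so differentiating is a single application of the chain rule (a sketch using the standard profile notation; the constant c is the kernel's normaliser):

```latex
P(x) = \frac{c}{n h^{d}} \sum_{i=1}^{n}
         k\!\left( \left\lVert \frac{x - x_i}{h} \right\rVert^{2} \right)
\quad\Rightarrow\quad
\nabla P(x) = \frac{c}{n h^{d}} \sum_{i=1}^{n}
  k'\!\left( \left\lVert \frac{x - x_i}{h} \right\rVert^{2} \right)
  \cdot \frac{2\,(x - x_i)}{h^{2}}.
```

That is why k becomes k' when taking the gradient.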

  • @fatarrammah8829 · 9 years ago

    Thanks very much.