Convolution in the time domain

  • Published 25 Jul 2024
  • This video lesson is part of a complete course on neuroscience time series analyses.
    The full course includes
    - over 47 hours of video instruction
    - lots and lots of MATLAB exercises and problem sets
    - access to a dedicated Q&A forum.
    You can find out more here:
    www.udemy.com/course/solved-c...
    For more online courses about programming, data analysis, linear algebra, and statistics, see
    sincxpress.com/
  • Science & Technology

COMMENTS • 33

  • @romanvereb7144 · 4 years ago +5

    Mike X Cohen - the unsung hero of our age

  • @tranez2205 · 4 years ago +1

    Awesome video! Thank you so much!

  • @helenzhou3530 · 3 years ago

    This video is super helpful, thank you so much!

  • @IamGQ87 · 4 years ago +3

    Really very pedagogical. Thank you.

  • @weilawei · 4 years ago +7

    Super clear explanation, very intuitive. Thank you.

  • @jaimelima2420 · 2 years ago

    This is good stuff. Good job!

  • @kaymengjialyu5086 · 3 years ago +3

    You are such a good teacher :)

  • @sachindrad.a836 · 2 years ago

    Very nice explanation

  • @violincrafter · 4 years ago +5

    Wings of convolution: a good band name

    • @mikexcohen1 · 4 years ago +3

      I'll be the back-up kazoo player.

  • @RenanAlvess · 3 years ago

    Congratulations on the explanation, it was very enlightening for me

    • @mikexcohen1 · 3 years ago

      Nice to hear. I made this video just for you, Renan :D

  • @ormedanim · 3 years ago +7

    You lost me at God's perspective; now I'm flipping (out) instead of the kernel :D But I am very thankful for all the videos and the ANTS book

    • @mikexcohen1 · 3 years ago +1

      Nice.

    • @jaimelima2420 · 2 years ago

      Richard Hamming's Digital Filters explains this God's-perspective view in a different way; worth checking, imho.

  • @williammartin4416 · 1 year ago

    Excellent explanations

  • @jesusdanielolivaresfiguero4752 · 3 years ago +1

    Is there a way to buy your Analyzing Neural Time Series Data book on credit for monthly payments?

    • @mikexcohen1 · 3 years ago +2

      Hi Jesus. Find my email address (it's on my CV) and send me an email about this.

  • @hurstcycles · 3 years ago

    If the kernel is a Morlet wavelet (formed by combining a constant sine wave and a Gaussian) and is symmetrical around the midpoint, flipping the kernel is not necessary; is that accurate? Thanks for the great video

    • @mikexcohen1 · 3 years ago +1

      Kind of, but be careful with the descriptions: the kernel always needs to be flipped, but if the kernel is symmetric, then flipping has no effect. (Also, sine is an odd function and thus asymmetric; cosine is symmetric about zero.)
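
A quick way to see this point concretely. The course mentioned above uses MATLAB, but here is a minimal Python/NumPy sketch (illustrative only, not code from the video): convolution flips the kernel and cross-correlation does not, so the two coincide exactly when the kernel is symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(200)

# Symmetric kernel: a Gaussian, essentially unchanged by flipping
gauss = np.exp(-np.linspace(-2, 2, 41) ** 2)
# Asymmetric kernel: a one-sided decaying exponential
decay = np.exp(-np.linspace(0, 4, 41))

for name, kern in [("Gaussian", gauss), ("decaying exp", decay)]:
    conv = np.convolve(signal, kern)           # convolution flips the kernel
    corr = np.correlate(signal, kern, "full")  # correlation does not
    print(name, "-> flipping matters:", not np.allclose(conv, corr))
# Gaussian -> flipping matters: False
# decaying exp -> flipping matters: True
```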

  • @bokkieyeung504 · 3 years ago

    I'm wondering why we don't align the center of the kernel with the edge of the signal (zero-padding is still needed, but with fewer extra zeros) so that we get a result with exactly the same length as the original signal, and thus no need to cut off the "wings"?

    • @mikexcohen1 · 3 years ago +1

      If you are implementing convolution manually in the time domain using for-loops, then yes, that's convenient. But the formal procedure is done to match the implementation in the frequency domain, which is much faster.
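
To make the "wings" concrete, a minimal Python/NumPy sketch (illustrative, not code from the course): the full convolution of an N-sample signal with an M-sample kernel has N + M - 1 samples, and trimming half a kernel length from each end leaves a result the same length as the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(100)      # N = 100
kernel = np.hanning(21)                # M = 21 (odd length)

full = np.convolve(signal, kernel, mode="full")
print(len(full))                       # N + M - 1 = 120

half = len(kernel) // 2                # 10 samples per "wing"
trimmed = full[half:len(full) - half]  # cut off both wings
print(len(trimmed))                    # 100: same length as the signal
print(np.allclose(trimmed, np.convolve(signal, kernel, mode="same")))  # True
```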

  • @prempant6428 · 2 years ago

    How do you decide what sort of kernel to use?

    • @mikexcohen1 · 2 years ago +1

      That's application-specific. But the procedure of convolution doesn't depend on the shape or length of the kernel.

  • @MrPabloguida · 1 year ago

    Is it fair to say that the resulting signal, even after cutting off the wings, will still be "contaminated" by the zero padding for at least another half kernel length, which would be when it starts being a pure, clean signal/kernel convolution? Does that make sense?

    • @mikexcohen1 · 1 year ago +1

      It is certainly the case that edges are always difficult to interpret from any kind of filtering. When possible, it's best to have extra time series before and after the period of interest, so that you can ignore those edges.
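
A small Python/NumPy check of the question above (illustrative, not from the course): convolving a segment in isolation agrees exactly with convolving the full recording except within half a kernel length of each edge, which is precisely the zero-padding contamination being asked about.

```python
import numpy as np

rng = np.random.default_rng(0)
recording = rng.standard_normal(1000)  # a longer "recording"
kernel = np.hanning(51)
half = len(kernel) // 2                # 25 samples

seg = slice(400, 600)
# Segment extracted AFTER convolving the full recording (real data at its edges)
clean = np.convolve(recording, kernel, mode="same")[seg]
# The same segment convolved in isolation (implicit zero padding at its edges)
padded = np.convolve(recording[seg], kernel, mode="same")

diff = np.abs(clean - padded)
print(diff[:half].max() > 1e-12)         # True: first half-kernel is contaminated
print(diff[-half:].max() > 1e-12)        # True: so is the last half-kernel
print(np.allclose(diff[half:-half], 0))  # True: the interior is unaffected
```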

  • @brixomatic · 1 year ago

    Wouldn't the convolution be a better representation of the signal if you could wrap around the edges of the signal?
    I.e., you'd start the kernel's midpoint at the start of the signal and take the left half of the kernel from the right side of the signal, and if the kernel exceeds the right bound, take the data from the start of the signal. This way your convolution would have the same length as the signal, but operate only on the signal's data, and not sneak in zeroes that have no meaning and pollute the results.

    • @mikexcohen1 · 1 year ago +1

      Yes, that's called "circular convolution"; what I explain here is "linear convolution." Both methods produce edge effects that should not be interpreted.

    • @user-vo7oe1be8j · 9 months ago

      @mikexcohen1 Teacher, I want to make sure my thoughts are correct.
      The edge effect occurs when we use the convolution theorem to obtain the result of the convolution between two signals.
      This is because the convolution theorem uses the FFT. If the max frequency of the two signals exceeds the Nyquist frequency, aliasing will occur. Is this why it's called the "edge effect"?
      Sorry, I'm not a native English speaker; if something is confusing, please correct me.
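
The last question is left unanswered in the thread, so the following Python/NumPy sketch (an illustration, not an official answer from the instructor) may help: multiplying equal-length FFTs gives circular convolution, whose wrap-around affects only samples near the edges, and zero-padding both sequences to N + M - 1 before the FFT reproduces time-domain linear convolution exactly. No aliasing is involved; the edge effect comes from the zeros (or wrapped data) at the boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)   # N = 128
k = np.hanning(15)             # M = 15

# Circular convolution: multiply FFTs of the SAME length as the signal.
kpad = np.zeros_like(x)
kpad[:len(k)] = k
circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(kpad)).real

# Linear convolution for comparison (np.convolve zero-pads implicitly).
lin = np.convolve(x, k, mode="full")   # length N + M - 1

# Away from the edges the two agree; the first M-1 samples differ,
# because circular convolution wraps the end of the signal around.
print(np.allclose(circ[len(k):], lin[len(k):len(x)]))    # True
print(np.allclose(circ[:len(k) - 1], lin[:len(k) - 1]))  # False

# Zero-padding both sequences to N + M - 1 turns the FFT method into
# exact linear convolution -- no aliasing anywhere in this procedure.
nconv = len(x) + len(k) - 1
lin_fft = np.fft.ifft(np.fft.fft(x, nconv) * np.fft.fft(k, nconv)).real
print(np.allclose(lin_fft, lin))  # True
```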