Applied DSP No. 8: Filtering via Fast Fourier Transform

  • Published 6 Feb 2025

COMMENTS • 25

  • @georgephillips3487 · 3 years ago +7

    This channel is amazing... honestly lost for words. This could easily compete with 3b1b.

  • @theDemong0d · 3 years ago +6

    These videos are really stellar, please keep it up. I learned all the mathematical underpinnings of this stuff in my physics degree but this summary is just what I needed to wrap my head around the practical aspects to do some low level audio programming. Thanks!

  • @lutzmmobil · 6 months ago +1

    I love these videos, they're super helpful in preparing for my Technical Computing exam (well it's tomorrow now but I've been preparing for longer obviously)!

    • @JXQU3 · 6 months ago

      How did it go?

  • @Tarnith · 3 years ago +4

    Thanks for making these videos! Excellent presentation, this makes a great DSP study aid.

  • @gadirom · 3 years ago +1

    Some of the best videos on dsp I've ever seen!

  • @arianamzp7421 · 3 years ago +3

    great high quality content! learned a lot. keep it up.

  • @nihar2001 · 3 years ago +2

    Thank you so much for these videos. Please continue with the series (fast😅)

  • @standardengineer · 1 year ago

    Great video! I wish the overlap-save method was also shown as your animations really bring the DSP steps to life!
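
    Since overlap-save isn't shown in the video, here is a minimal numpy sketch of the idea; the function name, block size, and filter below are illustrative assumptions, not the video's code:

    ```python
    import numpy as np

    def overlap_save(x, h, block=64):
        """Filter x with FIR h via block-wise FFTs, overlap-save style (sketch)."""
        M = len(h)
        N = block + M - 1                      # FFT length per block
        H = np.fft.rfft(h, N)                  # filter spectrum, computed once
        padded = np.concatenate([np.zeros(M - 1), x])  # M-1 samples of "history"
        out = []
        for start in range(0, len(x), block):
            seg = padded[start:start + N]
            if len(seg) < N:                   # zero-pad the final partial block
                seg = np.pad(seg, (0, N - len(seg)))
            y = np.fft.irfft(np.fft.rfft(seg) * H, N)
            out.append(y[M - 1:M - 1 + block]) # keep only the un-aliased samples
        return np.concatenate(out)[:len(x)]
    ```

    Each block's first M-1 output samples are corrupted by circular wrap-around, so they are discarded; the overlapping input samples saved from the previous block supply them instead.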

  • @kevinchubb3125 · 1 year ago

    Very nice!

  • @mattrobinson999 · 1 year ago

    Absolutely brilliant explanation. Thank you.

  • @noharahien · 1 year ago

    5:25 Could you please explain more about how this phenomenon relates to the concept of "aliasing"?
    From what I've seen, the overlapping isn't a result of a lack of resolution in the frequency domain: if you don't zero-pad but only increase the FFT size so that more samples are taken for each FFT, you get the FFT result at double the resolution, yet you're not solving the time-domain overlapping at all. It still overlaps over the length of the convolution window.
    Generally very good content, thank you so much for all these videos!!
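
    For what it's worth, the wrap-around at 5:25 is exactly the time-domain aliasing of circular convolution, and zero-padding the FFT to at least N + M - 1 points removes it. A quick numpy check (the signal and filter lengths here are arbitrary):

    ```python
    import numpy as np

    x = np.arange(1.0, 9.0)          # 8-sample signal
    h = np.array([1.0, 0.5, 0.25])   # 3-tap filter
    full = np.convolve(x, h)         # true linear convolution, length 8 + 3 - 1 = 10

    # FFT at the signal length: circular convolution, the tail wraps onto the start
    circ = np.fft.irfft(np.fft.rfft(x, 8) * np.fft.rfft(h, 8), 8)

    # FFT zero-padded to N + M - 1: identical to linear convolution
    N = len(x) + len(h) - 1
    lin = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)

    print(np.allclose(lin, full))            # True
    print(np.allclose(circ, full[:8]))       # False: first M-1 samples are aliased
    print(np.allclose(circ[2:], full[2:8]))  # True: the remaining samples match
    ```

    Making the FFT longer without zero-padding just takes in more signal at the same ratio of filter to block length, which is why the wrap-around never goes away on its own.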

  • @dranoel9244 · 2 years ago +1

    Thank you for these awesome videos! :)
    Also I can't keep myself from pointing out (which probably nobody has ever done^^) that your first name might be condensed to "calf" :P

    • @youngmoo-kim · 2 years ago

      Story of my life… though I don't think I qualify as "young" anymore 🙂

  • @Banzay20 · 1 year ago

    Your intro is so cool, love it, and the content is excellent! Do you use any music software for your intro?

  • @goofi953 · 1 year ago

    Can you do a video on frequency/phase recovery? I love your videos!

  • @smoua4588 · 2 years ago

    Thank u for this video

  • @willfeng6149 · 2 years ago +2

    Great video. Where did Episode 5 go?

    • @youngmoo-kim · 2 years ago +1

      Just posted it, sorry for the long wait. I'm teaching the course again this Fall, so there will be more frequent updates in the coming weeks!

  • @ytubeleo · 3 years ago +1

    Big O notation describes what happens in the limit but doesn't take into account the complexity of the operations involved in each step. Presumably convolution starts out being far, far easier than the FFT. For example, I would imagine that the given example of a 257-sample moving average is much easier computationally than taking the FFT: for a signal of 1,024 samples convolved with a filter of length 257, that's 263 thousand integer multiply-and-accumulate operations and 1,024 division operations, plus the bit of extra signal at the end, I guess. But my guess is that the 31,744 "Fourier operations" (and more with zero-padding) are going to be way more complicated than simple MAC operations and one division per sample, or am I wrong about that?
    Can you give some indication of roughly when it makes sense to switch to using the FFT instead of convolution?
    Actually, in the context of a moving average filter, I don't think it's ever going to make sense, since the big O for a moving average filter is actually O(N), so it's never going to be better to use the FFT. Or am I totally misunderstanding something here? Thanks.

    • @youngmoo-kim · 2 years ago +1

      It's true, big O is a crude approximation, but the general principle holds. It used to be that we could directly compare the number of multiply-accumulate (MAC) operations, which is the bulk of the computation (those comparisons also relied on tricks like having precomputed sinusoid tables, which take up some memory). These days, both processors and FFT libraries are so highly optimized (with very little overhead) that it's difficult to know exactly how many operations/cycles are involved. Some software implementations (MATLAB) will even make the decision for you (implementing an FIR filter as a straight convolution or via FFT depending on the length of your inputs). Generally, if you have signals/filters of more than 100s of samples, the FFT is more efficient, and when you're dealing with long convolutions (like the reverb response of a large room, which can be several seconds), the FFT is *way* faster.
      Also, the straight convolution is definitely O(N^2): the number of operations is proportional to the square of the length of signal/filter (you have to compute every shift of the filter against the signal). Hope this is helpful!

    • @ytubeleo · 2 years ago

      @youngmoo-kim Thanks for your reply, and I really hope I'm not wrong and wasting your time here, but are you saying that the big O of an FIR filter of a specific length Q applied to a signal of length N is O(N^2)? My understanding is that it is O(N), and therefore much faster than processing via FFT as long as the FIR filter is much shorter than the signal. If not, can you please explain what I'm wrong about?
      In the example of a moving average filter, if the signal is 1,000 times as long, it's going to take 1,000 times as long to process, right? Not 1000^2 = 1,000,000 times. Whereas doing it by FFT will take 1000·log2(1000) ≈ 9,966 times as long. So the FIR will be far, far faster.
      This is because the FIR filter is much shorter.
      In your example of the FIR moving average you make the FIR filter the same length as the signal by adding loads of zeros, but that just creates unnecessary empty calculations, right? So it's really 263,000 operations rather than 1,048,576.
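
    For what it's worth, both points can hold at once: with a fixed short filter you wouldn't FFT the entire signal anyway; you process it in blocks sized to the filter (overlap-add or overlap-save), so direct filtering costs Q MACs per output sample while the block-FFT cost per sample scales with log2 of the filter length, not the signal length. A back-of-envelope count; the constants below are rough assumptions, not measurements:

    ```python
    import math

    def direct_ops_per_sample(q):
        """Direct FIR: one multiply-accumulate per tap per output sample."""
        return q

    def block_fft_ops_per_sample(q):
        """Overlap-save with FFT size n = next power of two >= 2q.
        Per block: forward + inverse FFT plus the spectrum multiply,
        counted loosely as 3*n*log2(n) + n operations, producing
        n - q + 1 new output samples."""
        n = 1 << math.ceil(math.log2(2 * q))
        return (3 * n * math.log2(n) + n) / (n - q + 1)

    for q in (8, 64, 257, 4096):
        print(f"{q:5d} taps: direct {direct_ops_per_sample(q):6.0f}  "
              f"block-FFT {block_fft_ops_per_sample(q):8.1f}")
    ```

    On these rough numbers the crossover lands somewhere in the tens of taps, consistent with the "more than 100s of samples" rule of thumb, and the per-sample gap grows dramatically for long filters like room reverbs.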

  • @saumoon · 3 years ago +3

    So... DSP No. 7 is missing?