For those of you who want to see me break down further how the FFT implementation works, check out this video: ua-cam.com/video/Ty0JcR6Dvis/v-deo.html Also, a subtle mistake that a commenter made me aware of -- at 26:40, instead of replacing w with (1/n * e^{-2 pi i / n}), the right way to do this is to take the final output of the IFFT at the end of the recursion and divide it by n. Sorry about that error; in general, all bugs like this one will be noted in the description of the video. So the full change is w = e^{-2 pi i / n}, and then, somewhere outside the scope of the IFFT function, ifft_result = 1/n * IFFT(). As one additional side note, there seems to be a little confusion about the point-by-point multiplication at 6:31. Remember, all that's happening here is we are sampling points from polynomials A(x) and B(x) and using these points to find a set of points for C(x) = A(x)B(x). Every corresponding y-coordinate for C(x) will be the y-coordinate of A(x) multiplied by the y-coordinate of B(x). Therefore, point-by-point multiplication really means "let's just keep the x-coordinates and multiply the y-coordinates."
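To make the fix concrete, here's a minimal Python sketch (my own helper names, not the video's code) where the recursion itself never scales anything, and the single division by n happens once in a wrapper around the IFFT:

```python
import cmath

def fft(coeffs, sign=+1):
    # Radix-2 FFT; len(coeffs) must be a power of 2.
    # sign=+1 evaluates at the roots of unity w = e^{2*pi*i/n} (forward);
    # sign=-1 uses w = e^{-2*pi*i/n} -- the *unscaled* inverse transform.
    n = len(coeffs)
    if n == 1:
        return list(coeffs)
    even = fft(coeffs[0::2], sign)
    odd = fft(coeffs[1::2], sign)
    out = [0j] * n
    for j in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * j / n)
        out[j] = even[j] + w * odd[j]
        out[j + n // 2] = even[j] - w * odd[j]
    return out

def ifft(values):
    # The 1/n division happens exactly once, outside the recursion.
    n = len(values)
    return [v / n for v in fft(values, sign=-1)]

# Round trip: IFFT(FFT(p)) recovers p (up to floating-point error).
p = [5.0, 3.0, 2.0, 1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(ifft(fft(p)), p))
```

Dividing by n inside the recursion would apply the factor at every level, which is exactly the bug being corrected here.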
@@electronicmusicwave7459 Great question, and the answer is that multiplication of polynomials is an indirect way to get to the DFT. The key idea is that one of the fastest ways to do polynomial multiplication is to take the coefficient representations of each polynomial, evaluate them at the appropriate number of roots of unity, multiply the y-coordinates in this new representation, and convert back to coefficient form. The DFT shows up in the evaluation step because evaluating a polynomial at these points is just a matrix-vector product where the matrix is the DFT matrix. Remember, at its core, the FFT algorithm is just a fast way to multiply a vector by the DFT matrix -- polynomial multiplication just happens to be one case where the DFT shows up, so we can use the FFT. The DFT also shows up in all sorts of other applications, such as taking a time-domain signal and converting it to a frequency-domain representation, image compression algorithms, acoustics, etc. So there's nothing special about polynomial multiplication other than the fact that the fastest way to do it involves polynomial evaluation, which involves the DFT.
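A small sketch of that "evaluation is a matrix-vector product" point (my own names, using the video's w = e^{2*pi*i/n} sign convention): the naive O(n^2) loop below literally multiplies the coefficient vector by the DFT matrix [w^(jk)], and it matches plain polynomial evaluation at each root of unity.

```python
import cmath

def dft_matrix_apply(coeffs):
    # Naive O(n^2) "evaluate at all n-th roots of unity": row j of the
    # result is sum_k coeffs[k] * w^(j*k), i.e. the DFT matrix times coeffs.
    n = len(coeffs)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(coeffs[k] * w ** (j * k) for k in range(n)) for j in range(n)]

def horner_eval(coeffs, x):
    # Plain polynomial evaluation (Horner's rule) for comparison.
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

coeffs = [1, 2, 3, 4]
w = cmath.exp(2j * cmath.pi / len(coeffs))
# Evaluating the polynomial at each root of unity gives the same vector:
assert all(abs(dft_matrix_apply(coeffs)[j] - horner_eval(coeffs, w ** j)) < 1e-9
           for j in range(len(coeffs)))
```

The FFT computes exactly this vector, just in O(n log n) instead of O(n^2).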
@@Reducible Divide by 2, rather than by n, I think? Otherwise, in the recursion, we would divide by n/2 while calling sub-recursions that themselves divide by n/4, and so on... Really like the video, by the way!
Back in 1972, when I was a young physics grad working in an optics lab, I built an FFT optical spectrometer using a Michelson interferometer, with a laser reference beam as a ruler to trigger my computer program as to when to take samples of the sample beam, using an appropriate set of A/D converters. I scanned the interferometer by advancing one leg with a fine screw, and we took 16,384 samples, which made for a nice resolution. It was written in Fortran for a Data General Nova with a 5-pass compiler; I then hand-optimized the assembly code to make it efficient enough to run in real time, displayed on a long-retention phosphor oscilloscope as it scanned. I even made a hardware-assisted program instruction to do the bit reversal and shift to speed it up.
I had to read the last sentence just to make sure this wasn't one of those overly complex "andddd I have no idea what I'm talking about. I just made all that up." Memes 🤣 don't mind me, smooth brain over here 😭
6:58 not going to lie, I spent an inordinate amount of time trying to figure out how it could possibly be O(n)! before realizing the exclamation was only punctuation, not a factorial
Spent a crazy amount of time at 17:30, because w^(n-1) (that would mean 15 points: (15 - 1)/2 = 7, but that lacks one point to reach the opposite side) doesn't really work out when the first point is 1... Suppose it's 16 points divided by 2; then it works just fine :-p. Or did I overlook some math context from before? Has he uploaded corrected Python code by now?
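For what it's worth, the pairing works out exactly when n is even -- and in the video n is a power of 2, e.g. 16, as guessed above: w^(k + n/2) = -w^k, so every point has a partner directly opposite it on the circle. A quick numerical check (made-up n):

```python
import cmath

n = 16  # a power of 2, as in the video
w = cmath.exp(2j * cmath.pi / n)

# Each root of unity in the second half is the negation of its partner
# in the first half: w^(k + n/2) = -w^k.
for k in range(n // 2):
    assert abs(w ** (k + n // 2) + w ** k) < 1e-9
```

With an odd point count there is no such opposite-point pairing, which is why the plus/minus trick needs n to be (a power of) 2.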
Years ago, when I got into sound design, I was so excited that you can represent sound as both a waveform and a set of frequencies. I heard about this mysterious "Fourier transform" that was responsible for that, and the famous "FFT" algorithm that made it possible for computers to do things such as equalization quickly. I saw some videos on UA-cam that explained it, but lacking the proper mathematical background, I couldn't wrap my head around it. Now this video got to my recommended, with your excellent step-by-step explanation with actual examples. I still can't wrap my head around it.
@@markuscwatson Even better, learn it from a music producer's point of view. Once you get a grip for what filters and EQ do to audio, thinking about signals in the frequency domain makes a LOT more sense.
As an idiot, I feel like a blind man at a demonstration of optical illusions watching this video. I can tell there are really cool things happening, I just can't see them. OTOH, the 3Blue1Brown videos on FFT were inspirational. I really need to take some math classes. I love everything about audio. I like building speaker enclosures (physics!), I like coding (math!), I like building electronics, and I would love to do some DSP development. But when I start trying to grok the math behind all of that stuff, it feels like somebody took the numbers and four-function buttons off my calculator and now it's all Greek to me.
@@imlxh7126 The reason music relates so closely to frequency is that our ears are sensitive to frequency, but not to phase. A sine wave of a given frequency sounds almost exactly the same if it is delayed by a few milliseconds. Two sinewaves of different frequency sound the same if one is shifted relative to the other. Computers that recognize speech begin with frequency analysis.
13:00 for those of you who are wondering why he is plugging in the square of x_i: it's because when we use the coefficient form, if we just plug in x_i, the coefficients are interpreted as multiplying powers x^0, x^1, x^2, ...; so if we plug in x_i^2, they are interpreted as multiplying (x^2)^0, (x^2)^1, (x^2)^2, (x^2)^3, which are powers 0, 2, 4, 6, ... -- the even terms.
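To see this numerically, here's a tiny check (made-up coefficients) that P(x) = P_e(x^2) + x * P_o(x^2) when P_e and P_o are built from the even- and odd-indexed coefficients:

```python
def eval_poly(coeffs, x):
    # coeffs[i] multiplies x**i
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Made-up example: P(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + 6x^5
P  = [1, 2, 3, 4, 5, 6]
Pe = P[0::2]   # [1, 3, 5]: even-indexed coefficients, i.e. 1 + 3y + 5y^2
Po = P[1::2]   # [2, 4, 6]: odd-indexed coefficients,  i.e. 2 + 4y + 6y^2

# Feeding y = x^2 into Pe turns its powers into 0, 2, 4, ... -- exactly the
# even powers of P -- which is why the split identity holds:
x = 1.7
assert abs(eval_poly(P, x) - (eval_poly(Pe, x * x) + x * eval_poly(Po, x * x))) < 1e-9
```

The same identity is what lets the FFT evaluate one size-n polynomial via two size-n/2 polynomials.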
An aside: When you mention that any polynomial of degree N can be uniquely determined by N+1 points? That's how many forms of forward error correction work, suitably modified for modular arithmetic. If you want to send your message you break it up into N parts, each of which becomes a point on a polynomial of degree N-1. Then generate as many extra points on the polynomial as you need to generate a list of K points where K>N, and transmit those K points through your unreliable communications channel. As long as at least N of the K make it through to the other end, you can recover the original message.... and thus your phone call can keep on going even with a little radio interference taking place. You can tune K to be optimal for the reliability of your communications channel and the acceptable level of overhead.
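Here's a toy sketch of that idea over the rationals (real codes like Reed–Solomon work over finite fields, per the "suitably modified for modular arithmetic" caveat; all names here are made up): split a message into N = 3 values, treat them as coefficients of a degree-2 polynomial, transmit K = 5 points, and recover the polynomial from any 3 survivors via Lagrange interpolation.

```python
from fractions import Fraction

def interpolate(points, x):
    # Lagrange interpolation: evaluate at x the unique polynomial of degree
    # < len(points) passing through all the given (xi, yi) points.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

def encode(message, k):
    # Message pieces become coefficients of a degree len(message)-1
    # polynomial; transmit k > len(message) evaluation points.
    return [(x, sum(c * x ** p for p, c in enumerate(message))) for x in range(k)]

message = [7, 1, 4]                            # N = 3 pieces
shares = encode(message, 5)                    # K = 5 points over the channel
survivors = [shares[0], shares[2], shares[4]]  # only 3 arrive -- still enough

# The polynomial interpolated from the survivors agrees with the original
# everywhere, so the lost points (and the message) are recoverable:
assert all(interpolate(survivors, x) == y for x, y in shares)
```

Any 3 of the 5 points pin down the degree-2 polynomial uniquely, which is the N+1-points fact from the video doing the heavy lifting.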
@@AJMansfield1 the ones that I was thinking about going through initially are Reed–Solomon codes and the Berlekamp–Welch algorithm. Both of those techniques rely on similar ideas with polynomials, but over a finite field. Pretty cool stuff. Convolutional error-correction codes are also pretty interesting, so I may think about addressing those as well. I have an insanely long list of video ideas that I'm adding to one by one based on requests and my own interest, so I can't make any promises :)
Me! This video wasn't quite as easy to follow as Grant's videos, it was a bit too fast paced, but this guy seems to just be starting out, and he's doing a great job so far!
Just got recommended your channel, really like it! If you need any topic suggestions, I'd love to see a video about the disjoint set and union-find algorithm. That's one of my favorites, but it's not as well known. It even uses the inverse Ackermann function for the big-O analysis too which is super weird and interesting.
Hey, didn't expect this! I absolutely LOVE the work you do, CodeParade! Thanks for leaving this comment! I also love the union-find algorithm. The ideas that lead up to the ultimate optimization of path compression and the Ackermann function are also quite beautiful, and it's definitely something I'll keep in mind!
My reaction was instead "Holy crap! How has it already been 3 hours!?!" because I like to pause and think things through and just marvel at the beauty of it
This popped up in my recommendations. I have a bad habit of watching long, intricately detailed descriptions of concepts that I do not have the foundational education to understand. This is probably why every time I see a video like this that assumes I have some prior knowledge, I wonder how I ended up here. The video itself, however, is spectacular. Very easy to follow, even with my lack of understanding. (I don't think I am describing this very well, but you'll have to trust me.) I work as a cable technician and have a very VERY basic understanding of RF signal processing theory. It's always interesting to peer into the depths of what keeps it all tied together. I keep telling myself that one day I'll work up the nerve to pursue a more structured, formal education in these topics, but in the mean time I am glad that such accessible presentations exist out here in the wild.
Thank you so much for taking the time to write out such a nice comment! I try to make the videos as accessible as possible, but it can be tough especially when we are talking about something as advanced as algorithms to calculate fourier transforms. But I'm glad you were able to get something out of it!
This is a great video. I love the close interconnection of serious mathematics and efficient programming. This didn't just help me better understand tricks for efficient algorithms (the log n trick is awesome), but also understand why the Fourier transform is defined as it is. I mean, I understood the mathematics before, but this gave so much intuition. Some background knowledge definitely helps here, I think, but I love the level. Please keep it up! Simpler concepts are covered pretty well elsewhere too, but this might be the best video on the FFT on all of UA-cam by quite a bit (imo). Keep it up! See you when you've totally blown up :)
It won't -- watching this as a common programmer is like a laborer watching how a shovel is made in a forge. Nice to watch, and nice to know how it's done, but it won't do the shoveling 😀
I like the fact you actually laid out the pseudocode for the algorithm at the end of each section, most channels that cover topics like these usually leave it at the block diagram. Shows you really know your stuff.
Very helpful video. However, I'd like to point out that at 10:50, the decomposed functions should be P_e(x) and P_o(x), not P_e(x^2) and P_o(x^2), because once we substitute in x^2, it should get us back to the original function. No one has pointed out this mistake, so I figured I would, because I was confused for a little bit.
For anyone who has the same confusion -- it's actually correct. You start with a polynomial P(x), which in the example goes up to x^5. To decompose it, you split the terms into two parts: in the video's example, the even part has the x^2 and x^4 terms (plus the constant), and the odd part has the x^3 and x^5 terms, except that x^3 and x^5 can be expressed as x*x^2 and x*x^4. The two decomposed functions, P_e and P_o, can be thought of as functions of y, where y = x^2. So essentially, 2x^4 + 7x^2 + 1 is P_e(y) = 2y^2 + 7y + 1. It's not a mistake: P_e(x^2) just means P_e takes as its input the x^2 from the original function. FYI -- the video's content basically follows the book called "Algorithms". I think combining the book with the video makes a lot more sense... coming from somebody who read the book and didn't understand a thing, had to watch the video, found the video makes high-level sense but omits a lot of details, and so had to go back to the book again.
@@lbognini Hey, I just got confused by that as well, so maybe let me try to explain it the way I think I understood it. P_e itself is P_e(x) = 2x^2 + 7x + 1. Looks weird, but let's calculate its value at a position; call that position "a". If you substitute a into the formula for P_e, you'll notice that a stands exactly where x^2 was standing before: we evaluate P_e(a) with a = x^2. Now just calculate P_e(a) as normal: P_e(a) = 2a^2 + 7a + 1, and since a = x^2, we can write P_e(a) = 2(x^2)^2 + 7(x^2) + 1 = 2x^4 + 7x^2 + 1. Hope that helps! 😁
The best way to understand the FFT algorithm is to do it several times for different signal types by hand, starting off with simple signals. Get the equations, draw the diagrams, and then get the outputs and even write a script to do it if you're inclined. Sometimes you just have to follow the rules and steps, and trust that it will all make sense over time!
I used to teach this, and I always found a concrete example was helpful to students. While FFT applies anywhere, we were working with time based signals and extracting the frequencies. While you showed that at the start, I feel you should finish with that. The classic teaching approach of telling the class where you will be taking them, taking them there, and summing up by revisiting where they've been.
Dude, you uploaded this video like two minutes before I searched for a video on how FFT works. Not even subbed but this was one of the first videos to pop up. That was a ridiculously perfect timing! Also, I'm getting strong 3blue1brown vibes from your channel, subscribed right away!
Honestly among my favorite youtube videos ever created. I've never felt such an urge to like a video. Every time I'd come back, I'd be disappointed that I've already liked the video and can't do it again.
this is a ridiculously awesome amount of work, possibly hundreds of times as long as the video. it hurts to think this will make less money than someone doing a makeup tutorial
I just read about the FFT from CLRS, where the authors directly went into choice of evaluation points as complex roots of unity. But your intuition and the way you deduced that is brilliant! Thanks man!
I've been curious about the FFT for a while and had some rough ideas about its applications, but how it works was still a black box to me. Thanks a lot for making such useful educational content -- this is the best explanation of the FFT I've seen so far. Oh, and great video editing as well. I'm sure this will stick with me, and if I ever need a recap I'll go back to this video. Keep up the good work!
I'm a software developer with a background in sound engineering... so I'm used to peering into "black boxes" only to find the math is way over my head... I just trust the magic, LOL. THIS video was about to be more of the same were it not for the final wrap-up at the end -- WOW! Well done, sir. Well done!
I've been parked on this topic for the past 9 weeks, and only now, thanks to this incredible video, are the pieces finally falling into place! Thank you so much!
One of the first videos on UA-cam I've seen that intuitively implements a complex mathematical idea in code. I appreciate how much respect you have for your viewers, showing them the steps to solve this problem without "baby talk". Absolutely fantastic work! Also, I am aware of 3B1B, Primer, etc. Those channels are also fantastic, but I enjoyed how this video actually showed the code implemented from scratch.
You deserve all the compliments you’re reading. I’m fairly familiar with this topic and you did a great job explaining it, easily better than how it was first explained to me.
i had this exact algorithm in a numerical programming class in university and my professor did a fairly poor job at explaining what FFT and IFFT really are - he just showed us the algorithm and how its recursive call works. this video makes me truly appreciate how ingenious this algorithm actually is, and knowing where the algorithm comes from makes understanding it a lot easier. thank you for this amazing explanation!
@@igornoga5362 Common, but probably very unfortunate. Too many students will spend too much time trying to figure out how the magic box works that they lose focus on the rest of the lesson. It's hard for engineering students (and humans in general) to totally compartmentalize "oh, just pretend it works for now".
This might be one of the first videos, other than travel vlogs, that I fell in love with and subscribed to. The FFT had been a nightmare; every time I started learning it, my mind would just wander off. This video is soooooo good. Thank you for making this. And being beautiful... you have added beauty to the FFT algorithm.
Mind-blowing... Never had I thought of the FFT in terms of polynomial multiplication, nor had I ever seen the core of the algorithm so clearly. My first encounter with the FFT was horrible, with overwhelming notation making me dizzy. This channel absolutely deserves more views.
The pointwise multiplication in the frequency domain is basically convolution in the time domain. It's super fast because it's just a pointwise multiplication of the frequencies' magnitudes and an addition of their phases, where the magnitude and phase correspond to the polar representation of the complex coefficients (the FFT outputs Cartesian coordinates, but they can be transformed to polar coordinates for better understanding). The fact that we can perform convolution in the spectral domain as a simple pointwise multiplication is super important, because convolution done directly in the temporal domain requires every term of one signal to be multiplied with every term of the other, a complexity of O(n^2), which is often impractical. By taking both signals into the spectral domain with the FFT, the pointwise product itself is just O(n), making the whole procedure O(n log n) including the transforms.
OK, I remember being blown away the first time I learned about the FFT, and I had only seen it in the context of signal processing. This is truly amazing. I would watch the big-O notation part if I hadn't just spent a month on it this semester. Your video was amazing and I am really glad I came across it. Really inspiring, keep up the good work!!!
This is insanely good work. I wish I had a better sense of how Euler’s identity, complex numbers, polynomials, and the cycle of differentiation and integration all relate, because everything seems to boil down to these ideas at the heart of mathematics.
This is a fantastic video! Feels very much like the style of 3b1b- not just bc of the animation, but also bc you have a similar way of carefully yet thoroughly explaining complex ideas. Well done!
Great video!! I learned the FFT in the classical context, but this is amazing! (Thanks 3b1b for suggesting 😄) Just one detail: I feel we cannot fold the 1/n directly into omega (even if the inverse is otherwise right), because the 1/n must also apply to the 1 entries of the matrix, and omega gets raised to powers, which would raise the 1/n factor along with it. =)
I really wanted to understand FFT, so I took the time to thoroughly watch 3Blue1Brown's, Veritasium's, and your video on the topic, and your approach helped me the most! Maybe because I watched it last, but still, you did a phenomenal job. Well done!
I am glad 3b1b gave you a shoutout so that I got to see this! very nice video, I especially liked seeing the actual code implementation and of course the smooth animations :)
The FFT made me fall in love with engineering, but in my humble opinion your vid didn't quite get out there enough with it. Visualizing how it applies to real-world applications (signals specifically, which cover pretty much everything in the world) would've made it much more interesting, because it's truly amazing how many complex problems the FFT can solve that would be extremely difficult and time-consuming without it. The Fourier transform is a work of art not many get to truly experience. My perception of the world got slightly better after learning it, and it all became so fascinating... God bless you, Fourier
I don't even have a layman's understanding of complex mathematics, but I was very pleased to be along for the ride, just out of curiosity. Beautiful presentation and I hope to be able to revise and understand more one day.
For context, the reason we can compute polynomial multiplication efficiently with the FFT rests on two facts. 1. Multiplication of polynomials in coefficient representation is a *convolution*. From Wikipedia (en.wikipedia.org/wiki/Convolution): "The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences." 2. The product of two sequences in frequency space (the representation you get after applying the Fourier transform) is the frequency-space representation of the convolution of the original sequences (in time space), i.e. (f * g)^ = f^ g^. Here f, g are two functions, * is the convolution operator, and ^ denotes the frequency-space representation (applying the Fourier transform).
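If it helps, here's a self-contained numerical check of fact 2 (a naive O(n^2) DFT stands in for the FFT, purely for brevity; names are mine): zero-pad the two coefficient sequences, transform, multiply pointwise, invert, and compare against direct convolution.

```python
import cmath

def naive_conv(a, b):
    # Direct O(n^2) convolution: the coefficient product of two polynomials.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def dft(v, sign=+1):
    # Naive transform; sign=-1 (followed by a 1/n) inverts sign=+1.
    n = len(v)
    return [sum(v[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

a, b = [5, 2, 3], [2, 4]
n = len(a) + len(b) - 1            # pad so circular convolution == linear
A = dft(a + [0] * (n - len(a)))
B = dft(b + [0] * (n - len(b)))
C = [x * y for x, y in zip(A, B)]  # pointwise product in "frequency space"
c = [z / n for z in dft(C, sign=-1)]

expected = naive_conv(a, b)        # [10, 24, 14, 12]
assert all(abs(c[k] - expected[k]) < 1e-6 for k in range(n))
```

Swapping the naive `dft` for an FFT changes nothing but the running time, which is the whole point of the fast multiplication pipeline.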
Absolutely beautiful, I always wanted to understand as to why FFT works! The FFT has always been presented to me as a thing that can map vector convolution to vector multiplication element-wise, and now it looks astonishingly clear why. The trick is, the convolution (or as I sometimes call it, long multiplication without carrying) is *exactly* what you should do in order to multiply the polynomials in coefficient representation, therefore, you can apply it anywhere where convolution takes place. Thank you for this video!
Yea for signals convolution in the original time domain is the same as multiplication in the frequency domain. This is exactly analogous to convolution of the coefficient representation (using the distributive law) being the same as pointwise multiplication of the value representation. This is the convolution theorem.
Excellent. I was looking for info on the NTT and iNTT over finite fields, and this is a perfect explanation that doesn't incorporate any of the "motivating" nonsense about waves or rotations that aren't relevant for my particular situation!
This is such an amazingly lucid visual explanation of Fourier! I think Laplace himself would almost be proud, if not infuriated. So much to visualize, but so little time! I mean frequency…
My mind wasn't blown even once but I didn't arm it first. I'll rewatch this when I have read up on roots of unity and DFT matrices. I did enjoy the content even though it was currently way over my head ^^
I was about to comment about the voice's EQ and compression, then I saw the video date and checked your newest video. Congrats on the progress, much more understandable!
Thanks so much for this comment! Audio was definitely one of those parts of the channel that was admittedly quite weak in the early stages. Got a new microphone, learned a lot more about doing voice overs properly, and it's nice to see that someone notices :) Usually, the gauge for whether it's good is that no one complains :P
@@Reducible The classic work of behind-the-scenes engineers: when everything is built meticulously no one notices, get one thing wrong and it's top headline. I'm glad you liked my small token of appreciation! :)
When I was an undergrad in Computer Engineering, I had coincidentally taken an Algorithms class and a Signal Processing class in the same semester. During one of our DSP lectures, our professor made a throwaway comment saying, "Oh if you know what merge sort is, you already know how to do FFTs." That one sentence blew my mind so hard that no joke I had an existential crisis for a week. Thinking about it continues to give me goosebumps to this day.
Spent 6 hours figuring this out, and it blows my mind! What a great algorithm! The Fourier transform has a lot of applications, most prominently in signal processing. An ingenious algorithm to the core.
Amazing! I somehow missed this part of my signal processing course, so I was using the FFT as a black wonderbox almost every day. Now I have a clear view of how it actually works. Thanks a lot, man! Your work is highly appreciated. Special thanks for using Python for the code examples.
The video is really great and I understood it very well, but what I still don't understand is how this all relates to the time domain vs. the frequency domain. The FFT transforms a signal from the time domain to the frequency domain, but here it transforms a polynomial from coefficient form to value form. Aren't these two entirely different things? How is the coefficient domain the same as the time domain, and the value domain the same as the frequency domain?
They line up once you see the DFT in both: for a sampled signal, the time-domain samples play the role of the polynomial's coefficients (indexed by time), and evaluating that polynomial at the n-th roots of unity -- the coefficient-to-value conversion in the video -- is exactly the DFT, whose outputs are the frequency components. So the coefficient representation corresponds to the time domain and the value representation to the frequency domain.
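One concrete way to see the time-to-frequency reading (a sketch with made-up numbers, using a naive DFT): treat 16 time samples of a cosine as "coefficients", evaluate at the roots of unity, and the energy lands in the bins matching the sinusoid's frequency.

```python
import cmath, math

n = 16       # number of time-domain samples (made-up)
freq = 3     # the sinusoid completes 3 cycles across the window
samples = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]

# Naive DFT of the time-domain samples (the "coefficient -> value" step):
spectrum = [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]
mags = [abs(z) for z in spectrum]

# The energy concentrates in bins 3 and 16 - 3 (the +/- frequency pair):
assert mags[freq] > 7 and mags[n - freq] > 7
assert all(m < 1e-6 for k, m in enumerate(mags) if k not in (freq, n - freq))
```

The same evaluation that converts a polynomial to point-value form is here reading off which frequencies the signal contains.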
That was a connection I did not realize, doesn't seem like there's much literature about such a connection, but definitely is interesting. Thanks for teaching me something!
Ben at a high conceptual level all the coefficients of the two polynomials being multiplied will need some connectivity to each other. This is just like needing to have a circuit path between all pairs that need to communicate in say a telephone network.
I've found a lot of applications for the FFT over the years. Stochastics, statistics, machine learning, music, audio engineering, and so much more. As much as its elegance and ubiquity is stated in the video, it still somehow doesn't seem to do the algorithm justice. It is baked into everything.
@@keris3920 - Great examples. A software defined radio (SDR) can replace a heap of components used to process the radio frequency signal with the FFT, and then replace a lot more components used to process the resulting audio. I get the impression that a more computationally capable human brain could listen to the radio by simply thinking of the FFT, without needing any electronic components.
@@Liberty4Ever in audio engineering, it's almost paramount to do a lot of frequency-domain work in software using FFTs and wavelets. If you record something that's modified by the hardware, there is no going back and fixing it later. Music in general has a place for both HW and SW implementations of the FFT and frequency-domain modifications in general, so it is a great example (even if the average Joe doesn't have to consider the FFT when listening to music, someone has used an FFT to assist in almost every recorded work of art to date).
I never went into understanding the FFT, because it always seemed so scary to me -- even after I understood the Fourier transform itself. But holy math, this explanation is brilliant! The animations are so smooth (3B1B did a really amazing job with his module!), and everything is clean and understandable. The length of the final code is completely mind-blowing! I got so hyped I wrote my own Python implementation, which contains the entire polynomial multiplication, using the FFT implementation you provided (with a few minor modifications). So here it is! Notes:
- in FFTCore(), sign decides whether we use the FFT (sign=1) or the IFFT (sign=-1)
- using list comprehensions instead of for loops
- expanding the polynomials so we always operate on lengths that are powers of 2
- pretty-printing the result (the limitations of float values leave near-zero but non-zero imaginary components after the IFFT)

import math, cmath

def FFTCore(P, sign):
    n = len(P)
    if n == 1:
        return P
    ye, yo = FFTCore(P[::2], sign), FFTCore(P[1::2], sign)
    w = [cmath.exp(sign * 2 * cmath.pi * 1j * i / n) for i in range(n // 2)]
    y1 = [ye[j] + w[j] * yo[j] for j in range(n // 2)]
    y2 = [ye[j] - w[j] * yo[j] for j in range(n // 2)]
    return y1 + y2  # concatenation

def FFT(P, inverse=False):
    n = len(P)
    assert (n & (n - 1) == 0) and n != 0, "Length of P must be a power of 2"
    if inverse:
        return list(map(lambda x: x / n, FFTCore(P, -1)))
    return FFTCore(P, 1)

def polyMul(A, B):
    lenA, lenB = len(A), len(B)
    lenC = pow(2, math.ceil(math.log2(lenA + lenB)))
    A, B = A + [0] * (lenC - lenA), B + [0] * (lenC - lenB)
    Af, Bf = FFT(A), FFT(B)
    Cf = [Af[i] * Bf[i] for i in range(lenC)]
    return FFT(Cf, inverse=True)

def prettyPrint(C, lenC):
    res = [f"{x.real:0.2f}" if cmath.isclose(x.imag, 0.0, abs_tol=0.0001)
           else f"{x.real:0.2f}{['+', ''][x.imag < 0]}{x.imag:0.2f}j"
           for x in C]
    print(', '.join(res[:lenC - 1]))

A = [2, 3, 1]
B = [1, 0, 2]
C = polyMul(A, B)
prettyPrint(C, len(A) + len(B))
Minute 6:30: how do you multiply points? I'm trying to figure out the rule and I'm not getting it. By the way: subscribed before finishing the video. Seems AMAZING!
The Fourier transform is an integral from -∞ to +∞ of a continuous function; note the integrand (kernel) of the Fourier transform. The Discrete-Time Fourier Transform (DTFT) replaces the integral with a summation over discrete samples from -∞ to +∞, but is still continuous in frequency (continue to note the kernel). The Discrete Fourier Transform (DFT) samples the DTFT at N points. The Fast Fourier Transform (FFT) is an optimization of the DFT that uses N (a power of 2) samples, as shown in the video above. This article has links to all of the above: en.m.wikipedia.org/wiki/Discrete-time_Fourier_transform. Hope that helps.
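And to make the last step of that chain concrete -- the FFT computing exactly the same thing as the DFT, just faster -- here's a minimal sketch (my own helper names) comparing a direct O(n^2) DFT against a radix-2 Cooley–Tukey FFT:

```python
import cmath

def dft(x):
    # Direct O(n^2) DFT, straight from the definition.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey FFT: identical output, O(n log n); len(x) = 2^m.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * j / n) * odd[j] for j in range(n // 2)]
    return ([even[j] + t[j] for j in range(n // 2)] +
            [even[j] - t[j] for j in range(n // 2)])

x = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft(x), fft(x)))
```

Same N outputs, same kernel, just a divide-and-conquer reorganization of the sum.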
In my opinion, the Fourier transform is what actually allows us to go from the coefficient representation to the point-value representation. So in that context, doing calculations in point-value form is made possible by the Discrete Fourier Transform (DFT), and an efficient way to perform the DFT by exploiting its symmetry and periodicity is the Fast Fourier Transform (FFT). Feel free to correct me if I am wrong.
Most videos, and most people introducing the FFT, approach it by means of Fourier transforms and complex numbers -- myself included -- and I liked this way of explaining it, since it doesn't depend on knowing Fourier transforms or complex analysis to any significant degree. A very, very neat way to expose people to the FFT through the problem of fast polynomial multiplication. It just so happened I came across your video and realized it was so close to what I was trying to do with the Schönhage–Strassen and Fürer algorithms on the way to the Harvey–van der Hoeven O(n log n) algorithm. Still got some work to do, but it was a delight to see somebody create such a good video on a similar problem. Keep it up, great channel!
No dude, he explained it. It's about the n-th roots of unity; they are found using the exp(i*x) formula, which is itself defined as exp(i*x) = cos(x) + i*sin(x), exactly at 17:00.
I think Mr. Reducible was a little confused. Suppose f = 3x^2 + 2x + 5 and g = 4x + 2, so f(x)g(x) = 12x^3 + 14x^2 + 24x + 10. In MATLAB or Octave, type the following commands to compare convolution in the time domain against using fft and ifft:

f = [5 2 3 0 0]';
g = [2 4 0 0 0]';
h1 = conv(f, g)      % output: h1 = [10 24 14 12 0 0 0 0 0]'
F = fft(f);
G = fft(g);
H = F.*G             % element-by-element multiplication, not convolution
h2 = ifft(H)         % output: h2 = [10 24 14 12 0]

In other words, you don't graph f, graph g, and take samples; you perform the convolution directly on the coefficients. The video is still great, and I never knew before watching it that polynomial multiplication is a convolution.
I watched this video about 8 months ago and it was my first exposure to signal processing. As a result I’ve spent the past 8 months obsessing over the still unsolved problem of multiple fundamental frequency tracking/automatic music transcription. It has completely destroyed my social life and I still can’t put it down. This project started with math from the early 1800s and now everything I’m doing is based off articles published within the last few years, I’ve learned Julia and written many thousands of lines of code, and I’ve attempted almost every machine learning/signal processing technique known to man. I’ve put a PhD thesis level of work into this and the idea of stopping before I solve it is unfathomable to me, but at the same time so is the idea of actually solving it or at least beating the current (unsatisfactory) state of the art. Anyways, thanks for making this incredible video. It has completely derailed my life but holy hell has it been a fun ride so far.
For those of you who want to see me break down further how the FFT implementation works, check out this video: ua-cam.com/video/Ty0JcR6Dvis/v-deo.html
Also a subtle mistake that a commenter made me aware of -- at 26:40 instead of replacing w with (1/n * e^{-2 * pi i/ n}), the actual right way to do this is by taking the final output of the IFFT at the end of the recursion and dividing by n. Sorry about that error, and in general all bugs like this one will be updated in the description of the video.
So the full change is w = e^{-2 pi i / n}
And then somewhere outside the scope of the IFFT function ifft_result = 1/n * IFFT()
As one additional side note, there seems to be a little confusion about the point by point multiplication at 6:31. Remember, all that's happening here is we are sampling points from polynomials A(x) and B(x) and using these points to find a set of points for C(x) = A(x)B(x). Every corresponding y coordinate for C(x) will be the y coordinate of A(x) multiplied by the y coordinate of B(x). Therefore, point by point multiplication really means "let's just keep the x-coordinates and multiply the y-coordinates."
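To make the correction concrete, here is a minimal sketch in plain Python (not the exact code from the video; it assumes a power-of-two input length): the inverse pass uses w = e^{-2 pi i / n} inside the recursion, and the single division by n happens once at the end, outside the recursive function.

```python
import cmath

def fft(coeffs, invert=False):
    """Recursive radix-2 FFT; len(coeffs) must be a power of two.
    With invert=True it uses w = e^{-2 pi i / n} but does NOT divide by n."""
    n = len(coeffs)
    if n == 1:
        return list(coeffs)
    sign = -1 if invert else 1
    w = cmath.exp(sign * 2j * cmath.pi / n)
    even = fft(coeffs[0::2], invert)
    odd = fft(coeffs[1::2], invert)
    out = [0] * n
    wk = 1
    for k in range(n // 2):
        out[k] = even[k] + wk * odd[k]
        out[k + n // 2] = even[k] - wk * odd[k]
        wk *= w
    return out

def ifft(values):
    n = len(values)
    # The 1/n scaling is applied once, outside the recursion.
    return [v / n for v in fft(values, invert=True)]

coeffs = [5, 2, 3, 0]  # 5 + 2x + 3x^2
roundtrip = ifft(fft(coeffs))
assert all(abs(a - b) < 1e-9 for a, b in zip(roundtrip, coeffs))
```

If the 1/n were folded into w instead, it would get raised to powers along with w during the recursion, which is exactly the bug being corrected.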
:) first to reply to u
@@electronicmusicwave7459 Great question, and the answer is that multiplication of polynomials is an indirect way to get to the DFT. The key idea is that one of the fastest ways to do polynomial multiplication is to take coefficient representations of each polynomial, evaluate them at the appropriate number of roots of unity, multiply the y-coordinates in this new representation, and convert back to coefficient form. The DFT shows up in the evaluation step because evaluation of a polynomial at those points is just a matrix-vector product where the matrix is the DFT matrix. Remember, at its core, the FFT algorithm is just a fast way to multiply a vector by the DFT matrix -- polynomial multiplication just happens to be one case where the DFT shows up, so we can use the FFT.
The DFT also shows up in all sorts of other applications, such as taking a time-domain signal and converting it to a frequency-domain representation, image compression algorithms, acoustics, etc. So there's nothing special about polynomial multiplication other than the fact that the fastest way to do it involves polynomial evaluation, which uses the DFT.
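To illustrate the "evaluation is a matrix-vector product" point, here is a plain-Python sketch (the helper names are my own): build the n-by-n DFT matrix with entries w^(jk) and check that multiplying it by the coefficient vector matches directly evaluating the polynomial at each root of unity.

```python
import cmath

def horner(coeffs, x):
    """Evaluate a polynomial with coefficients [c0, c1, ...] at x."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

n = 8
w = cmath.exp(2j * cmath.pi / n)  # principal n-th root of unity
dft = [[w ** (j * k) for k in range(n)] for j in range(n)]

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
# Row j of the matrix-vector product is the polynomial evaluated at w^j.
matvec = [sum(dft[j][k] * coeffs[k] for k in range(n)) for j in range(n)]
direct = [horner(coeffs, w ** j) for j in range(n)]
assert all(abs(a - b) < 1e-6 for a, b in zip(matvec, direct))
```

The FFT computes exactly this matvec, but in O(n log n) instead of the O(n^2) the explicit matrix costs.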
Was just about to comment the error but saw you already had it! Great video and really clear too!
@@Reducible Divide by 2, rather than by n, I think? Otherwise, in the recursions, we would divide by n/2, while calling on recursions that divide by n/4 themselves, and so on... Really like the video, by the way!
Great video. Now I got it to work. Maybe you could add a correction in the video though?
Back in 1972 when I was a young physics grad working in an optics lab, I built an FFT optical spectrometer from a Michelson interferometer, using a laser reference beam as a ruler to tell my computer program when to take samples of the sample beam via an appropriate set of A/D converters. I scanned the interferometer by advancing one leg with a fine screw, and we took 16,384 samples, which made for a nice resolution. It was written in Fortran for a Data General Nova with a 5-pass compiler, then I hand-optimized the assembly code to make it efficient enough to run in real time, displayed on a long-retention phosphor oscilloscope as it scanned. I even made a hardware-assisted program instruction to do the bit reversal and shift to speed it up.
That sounds so cool!
yes, further optimization by using photonics !
sounds like a nice experiment for undergrad students
I had to read the last sentence just to make sure this wasn't one of those overly complex "andddd I have no idea what I'm talking about. I just made all that up." Memes 🤣 don't mind me, smooth brain over here 😭
As a physics undergrad with some knowledge about FORTRAN and Assembly, and made in '72.... WOAH
I think the most impressive thing about this Fourier Transform related video is that it never mentions words "sine" or "frequency".
Fourier transform and fast Fourier transform are different
or integrals hell
@@owenpenning1597 but they gave similar results
Often seems like e^jt is the more fundanmental concept, cos and sin are just the real and imaginary parts of that
I am curious why a sin hater would even care about Fourier.
6:58 not going to lie, I spent an inordinate amount of time trying to figure out how it could possibly be O(n)! before realizing the exclamation was only punctuation, not a factorial
I was thinking as I saw that: "he shouldn't have used the exclamation point there."
LOL... I wonder -- I think in Spanish they have an exclamation mark that is flipped upside down :p
Exclamation ! math is hard....XD
Spent a crazy amount of time at 17:30 because w^(n-1) doesn't really work out when the first point is 1 (that would mean 15 points; (15-1)/2 = 7, which lacks one point to reach the opposite side). Suppose it's 16 points divided by 2 -- then it works just fine :-p. Or did I overlook some math context from before? Has he uploaded corrected Python code by now?
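For what it's worth, the pairing the video relies on can be checked numerically -- for even n, the n-th roots of unity come in plus/minus pairs, w^(k + n/2) = -w^k (a quick plain-Python sketch):

```python
import cmath

n = 16
w = cmath.exp(2j * cmath.pi / n)  # principal n-th root of unity
roots = [w ** k for k in range(n)]

# Each root in the second half is the negation of its partner in the
# first half: w^(k + n/2) = -w^k, which is what lets the FFT reuse work.
for k in range(n // 2):
    assert abs(roots[k + n // 2] + roots[k]) < 1e-9
```

So with n = 16 points, the exponents run from w^0 through w^15, and every point does have a partner directly opposite it.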
Exclamation ! well deserved
Years ago when I got into sound design, I was so excited that you can represent sound as both a waveform and a set of frequencies. I heard about this mysterious "Fourier transform" that was responsible for that, and the famous "FFT" algorithm that made it possible for computers to do things such as equalization quickly. I saw some videos on UA-cam that explained it but, lacking the proper mathematical background, I couldn't wrap my head around it.
Now this video got to my recommended, with your excellent step-by-step explanation with actual examples. I still can't wrap my head around it.
You need to learn the FFT from an engineer's point of view. This is more math/algorithm than you likely need.
@@markuscwatson Even better, learn it from a music producer's point of view. Once you get a grip for what filters and EQ do to audio, thinking about signals in the frequency domain makes a LOT more sense.
As an idiot, I feel like a blind man at a demonstration of optical illusions watching this video. I can tell there are really cool things happening, I just can't see them. OTOH, the 3Blue1Brown videos on FFT were inspirational.
I really need to take some math classes. I love everything about audio. I like building speaker enclosures (physics!), I like coding (math!), I like building electronics, and I would love to do some DSP development. But when I start trying to grok the math behind all of that stuff, it feels like somebody took the numbers and four-function buttons off my calculator and now it's all Greek to me.
This guy has got to be joking if he seriously thinks we got any mileage in knowledge from this video.
He was just talking to himself
@@imlxh7126 The reason music relates so closely to frequency is that our ears are sensitive to frequency, but not to phase. A sine wave of a given frequency sounds almost exactly the same if it is delayed by a few milliseconds. Two sinewaves of different frequency sound the same if one is shifted relative to the other. Computers that recognize speech begin with frequency analysis.
13:00 For those of you who are wondering why he is plugging in the square of x_i: in coefficient form, the entries are interpreted as powers 0, 1, 2, ..., so if we plug in x^2, the terms are interpreted as (x^2)^0, (x^2)^1, (x^2)^2, (x^2)^3, which are powers 0, 2, 4, 6, ... -- the even terms.
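A quick plain-Python sketch of that point (the helper names are my own): take the even-index and odd-index coefficients as their own polynomials and evaluate them at x^2; together they reproduce P(x).

```python
def evaluate(coeffs, x):
    """Horner evaluation of a polynomial with coefficients [c0, c1, ...]."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

P  = [1, 2, 3, 4, 5]   # 1 + 2x + 3x^2 + 4x^3 + 5x^4
Pe = P[0::2]           # even-index coefficients: [1, 3, 5]
Po = P[1::2]           # odd-index coefficients:  [2, 4]

x = 3.0
# The decomposition from the video: P(x) = P_e(x^2) + x * P_o(x^2)
assert evaluate(P, x) == evaluate(Pe, x**2) + x * evaluate(Po, x**2)
```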
An aside: you mention that any polynomial of degree N is uniquely determined by N+1 points -- that's how many forms of forward error correction work, suitably modified for modular arithmetic. To send your message, you break it up into N parts, each of which becomes a point on a polynomial of degree N-1. Then you generate as many extra points on the polynomial as you need to get a list of K points where K > N, and transmit those K points through your unreliable communications channel. As long as at least N of the K make it through to the other end, you can recover the original message... and thus your phone call can keep going even with a little radio interference taking place. You can tune K to be optimal for the reliability of your channel and the acceptable level of overhead.
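As a toy illustration of that idea (a plain-Python sketch over exact rationals rather than the finite-field arithmetic real codes use; the message values are made up): encode a 3-part message as a degree-2 polynomial, transmit 5 points, and recover the message from any 3 of them by Lagrange interpolation.

```python
from fractions import Fraction
from itertools import combinations

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / Fraction(xi - xj)
        total += term
    return total

message = [7, 13, 42]  # N = 3 message parts (made-up values)
# Interpret the parts as P(1), P(2), P(3) of a degree-2 polynomial,
# then transmit K = 5 points: P(1) .. P(5).
base = list(zip([1, 2, 3], message))
sent = [(x, lagrange_eval(base, x)) for x in range(1, 6)]

# Any 3 of the 5 received points recover the original message.
for subset in combinations(sent, 3):
    recovered = [lagrange_eval(list(subset), x) for x in [1, 2, 3]]
    assert recovered == message
```

Real codes like Reed-Solomon do the same thing over a finite field so every value fits in a fixed number of bits.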
You've just discovered a future video I'm planning on error correcting codes (Shh don't tell anyone :P).
@@Reducible Convolutional error correcting codes too, pls?
@@AJMansfield1 The ones I was thinking about going through initially are Reed-Solomon codes and the Berlekamp-Welch algorithm. Both of those techniques rely on similar ideas with polynomials, but over a finite field. Pretty cool stuff. Convolutional error correction codes are also pretty interesting, so I may think about addressing those as well. I have an insanely long list of video ideas that I'm adding to one by one based on requests and my interest, so I can't make any promises :)
This is absolutely brilliant !
@id523a Can someone please explain this? I've implemented Diffie-Hellman, but I'm not familiar with Shamir's algorithm.
Who came here from 3blue1brown and fell in love with this channel?
That shoutout was awesome!
I didn’t know they are from 3blue1brown, but now that you mention it they remind me of one another
Me! This video wasn't quite as easy to follow as Grant's videos, it was a bit too fast paced, but this guy seems to just be starting out, and he's doing a great job so far!
Me too!
@@shadowthehedgehog2727 they are different but using the same tools
Just got recommended your channel, really like it! If you need any topic suggestions, I'd love to see a video about the disjoint set and union-find algorithm. That's one of my favorites, but it's not as well known. It even uses the inverse Ackermann function for the big-O analysis too which is super weird and interesting.
Hey, didn't expect this! I absolutely LOVE the work you do CodeParade! Thanks for leaving this comment! I also love the union find algorithm. The ideas that lead up to the ultimate optimization of path compression and the Ackermann function is also quite beautiful and definitely something I'll keep in mind!
@@Zcon18 which post ?
@@jaikumar848 the community post from a couple of days ago
I'd love to see a video on union-find. One of the most underrated data structures. Its so incredibly useful for graph problems.
@@jamesbalajan3850 absolutely
Holy crap! How has it already been 30 mins!?! You know it's a good video when you completely lose track of time!
Thank you so much for this comment! I love hearing comments like this one.
I didn't realize, that's amazing!
My reaction was instead "Holy crap! How has it already been 3 hours!?!" because I like to pause and think things through and just marvel at the beauty of it
This popped up in my recommendations. I have a bad habit of watching long, intricately detailed descriptions of concepts that I do not have the foundational education to understand. This is probably why every time I see a video like this that assumes I have some prior knowledge, I wonder how I ended up here.
The video itself, however, is spectacular. Very easy to follow, even with my lack of understanding. (I don't think I am describing this very well, but you'll have to trust me.) I work as a cable technician and have a very VERY basic understanding of RF signal processing theory. It's always interesting to peer into the depths of what keeps it all tied together.
I keep telling myself that one day I'll work up the nerve to pursue a more structured, formal education in these topics, but in the mean time I am glad that such accessible presentations exist out here in the wild.
Thank you so much for taking the time to write out such a nice comment! I try to make the videos as accessible as possible, but it can be tough especially when we are talking about something as advanced as algorithms to calculate fourier transforms. But I'm glad you were able to get something out of it!
I smell Grant's python library at work (oh, its credited in description, never mind)
I thought soo!!!
This is a great video. I love the close interconnection of serious mathematics and efficient programming. This didn't just help me better understand tricks for efficient algorithms (the log n trick is awesome), but also understand why the fourier transform is defined as it is. I mean, I understood the mathematics since before, but this gave so much intuition.
Some background knowledge definitely helps here, I think, but I love the level. Please keep it up! Simpler concepts are covered pretty well elsewhere too, but this might be the best video on FFT on all of UA-cam by quite a bit (imo). Keep it up! See you when you've totally blown up :)
I’m just getting into programming and i love learning why things work the way they do. I feel like this channel is going to be very helpful to me
It won't - watching this as a common programmer is like a worker watching how a shovel is made in a forge. Nice to watch and to know how it's done, but it won't do the shoveling 😀
I like the fact you actually laid out the pseudocode for the algorithm at the end of each section, most channels that cover topics like these usually leave it at the block diagram. Shows you really know your stuff.
WOW. I'm literally blown away. This video is EXCELLENT. PLEASE keep doing what you're doing! Instant share.
Thank you! Don't plan on stopping any time soon :)
Literally?
Very helpful video. However, I'd like to point out that @10:50, the decomposed functions should be P_e(x) and P_o(x) not P_e(x^2) and P_o(x^2) because once we do substitute in x^2, it should get us back to the original function. No one has pointed out this mistake so I figured I would because I was confused for a little bit.
For anyone who has the same confusion -- it's actually correct. You start with a function P(x), which in the example goes up to x^5. To decompose, you split the terms into two parts: in the video's example, the even part is x^2 and x^4, and the odd part is x^3 and x^5, except x^3 and x^5 can be expressed as x*x^2 and x*x^4. The two decomposed functions, P_e and P_o, are functions of y, where y = x^2. So essentially, 2x^4 + 7x^2 + 1 is P_e(y) = 2y^2 + 7y + 1. It's not a mistake; x^2 just means P_e takes as input the x^2 from the original function.
FYI -- all the video content basically comes from the book called "Algorithms". I think combining the book with the video makes a lot more sense. Coming from somebody reading the book: I didn't understand a thing, had to watch the video, the video then made high-level sense but omits a lot of details, so I had to go back to the book again.
@@jingyuanxu7272 Who is the author of this book? Where can I find it?
@@jingyuanxu7272 Thanks for confusing us even more.
@@lbognini Hey, I just got confused by that as well. So maybe let me try to explain it the way I think I understood it:
So the video writes P_e(x^2), where P_e(x) = 2x^2 + 7x + 1. Looks weird, but let's calculate the value at a position; call that position "a".
If you substitute a into the formula of P_e, you will notice that a stands exactly where x^2 was standing before: so a = x^2.
Now just calculate P_e(a) like normal: P_e(a) = 2a^2 + 7a + 1, and since a = x^2 we can write P_e(a) = 2(x^2)^2 + 7(x^2) + 1 = 2x^4 + 7x^2 + 1.
Hope that helps! 😁
One of the absolute best math videos I ever watched. Only problem is I couldn't find the code for this particular video in the repo.
Apologies, I made a rookie mistake and forgot to do that before uploading this video -- just pushed the code for this video.
Thank you kindly. Keep up the amazing work!
Agreed. Felt like a masterclass. I wish I had this video back in college when I covered the material.
You can find the code (with explanation too) over here
cp-algorithms.com/algebra/fft.html
@@shantanurahman5084 Thanks a lot. I meant the code for the animation, not that of the FFT.
This was incredible! 2.5 hours of the lecture in my university explained flawlessly in 28 minutes!!
The best way to understand the FFT algorithm is to do it several times for different signal types by hand, starting off with simple signals. Get the equations, draw the diagrams, and then get the outputs and even write a script to do it if you're inclined. Sometimes you just have to follow the rules and steps, and trust that it will all make sense over time!
I used to teach this, and I always found a concrete example was helpful to students. While FFT applies anywhere, we were working with time based signals and extracting the frequencies. While you showed that at the start, I feel you should finish with that. The classic teaching approach of telling the class where you will be taking them, taking them there, and summing up by revisiting where they've been.
Dude, you uploaded this video like two minutes before I searched for a video on how FFT works. Not even subbed but this was one of the first videos to pop up. That was a ridiculously perfect timing!
Also, I'm getting strong 3blue1brown vibes from your channel, subscribed right away!
Haha that's awesome! Always a great time to learn about the FFT!
I never knew Applejack's brother was into algorithm theory!! Awesome :^)
I think that has something to do with polynomial multiplication or something, ehhhemmm...
Honestly among my favorite youtube videos ever created. I've never felt such an urge to like a video. Every time I'd come back, I'd be disappointed that I've already liked the video and can't do it again.
this is a ridiculously awesome amount of work, possibly hundreds of times as long as the video. it hurts to think this will make less money than someone doing a makeup tutorial
the creator of these animations has a super loaded patreon page and other stuff. he is doing great!
No reason to shit on other creators though. A lot of experience and planning can go into a makeup tutorial too.
I just read about the FFT from CLRS, where the authors directly went into choice of evaluation points as complex roots of unity. But your intuition and the way you deduced that is brilliant! Thanks man!
I’ve been curious about FFT for a while and has some rough ideas about its applications, but how it works is still a black box to me. Thanks a lot for making such useful educational content - this is the best explanation on FFT I’ve seen so far, oh and great video editing as well. I’m sure this will stick with me, and even if I need some recap I’ll go back to this video. Keep up the good work!
I'm a software developer with a background in sound engineering... so I'm used to peering into "black boxes" only to find the math is way over my head... I just trust the magic, LOL. THIS video was about to be more of the same were it not for the final wrap-up at the end - WOW! Well done, sir. Well done!
26:44 "So, if your mind isn't blown, you haven't been paying attention." Not gonna lie, you had me in the first half.
I've been parked on this topic for the past 9 weeks, and only now, thanks to this incredible video, are the pieces finally falling into place! Thank you so much!
One of the first videos on UA-cam I've seen that intuitively implements a complex mathematical idea in code. I appreciate how much respect you have for your viewers by showing them the steps to solve this problem without "baby talk". Absolutely fantastic work!
Also, I am aware of 3B1B, Primer, etc. Those channels are also fantastic but I enjoyed how this video actually showed the code implemented from scratch.
Thank you so much for this awesome comment!
You deserve all the compliments you’re reading. I’m fairly familiar with this topic and you did a great job explaining it, easily better than how it was first explained to me.
I had this exact algorithm in a numerical programming class in university, and my professor did a fairly poor job of explaining what the FFT and IFFT really are - he just showed us the algorithm and how its recursive call works.
this video makes me truly appreciate how ingenious this algorithm actually is, and knowing where the algorithm comes from makes understanding it a lot easier.
thank you for this amazing explanation!
I'm here from 3b1b: they just kinda dropped fft on us in intro to EE.
A common assumption in engineering courses is that the FFT is a magic box that's very useful, but that students don't need to know what's inside.
@@igornoga5362 Common, but probably very unfortunate. Too many students will spend too much time trying to figure out how the magic box works that they lose focus on the rest of the lesson. It's hard for engineering students (and humans in general) to totally compartmentalize "oh, just pretend it works for now".
This might be one of the first videos, other than travel vlogs, that I fell in love with and subscribed. FFT had been a nightmare; every time I started learning it my mind would just wander off. This video is soooooo good. Thank you for making this. You have added beauty to the FFT algorithm.
I think 3Blue1Brown is very happy to see that his library is put to good use.
You even adopted his style of presentation. :)
Nice work.
This is the most elegant way to explain FFT I have ever encountered. Brilliant!
Mind-blowing... Never did I think of the FFT when speaking about polynomial multiplication, nor had I ever seen the core of the algorithm so clearly. My first encounter with the FFT was horrible, with overwhelming notation making me dizzy.
This channel absolutely deserves more views.
The multiplication of the coefficient sequences is basically a convolution, which becomes a pointwise multiplication in the frequency domain.
Which is super fast, because it's just a pointwise multiplication of the frequencies' magnitudes and an addition of their phases, where magnitude and phase correspond to the polar representation of the complex coefficients (the FFT outputs Cartesian coordinates, but they can be converted to polar for a better understanding).
The fact that we can perform convolution in the spectral domain as a simple pointwise multiplication is super important, as convolution done in the temporal domain means every term of one signal is multiplied with every term of the other, for a complexity of O(n^2), which is often impractical.
By taking both signals into the spectral domain with the FFT, the pointwise multiplication step is just O(n) (O(n log n) overall, including the transforms).
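That correspondence is easy to check numerically. A plain-Python sketch (power-of-two padding assumed; the helper functions are my own, not from any library): direct O(n^2) convolution of the coefficient sequences matches transforming, multiplying pointwise, and transforming back.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    w = cmath.exp((-2j if invert else 2j) * cmath.pi / n)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    out, wk = [0] * n, 1
    for k in range(n // 2):
        out[k] = even[k] + wk * odd[k]
        out[k + n // 2] = even[k] - wk * odd[k]
        wk *= w
    return out

def convolve_direct(a, b):
    """O(n^2) coefficient convolution: the 'long multiplication' view."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a, b = [5, 2, 3], [2, 4]   # 5 + 2x + 3x^2 and 2 + 4x
n = 8                      # power of two >= len(a) + len(b) - 1
A = fft(a + [0] * (n - len(a)))
B = fft(b + [0] * (n - len(b)))
C = fft([x * y for x, y in zip(A, B)], invert=True)
product = [round((c / n).real) for c in C[:len(a) + len(b) - 1]]
assert product == convolve_direct(a, b)   # [10, 24, 14, 12]
```

The padding to length 8 matters: without enough zeros, the FFT computes a circular convolution and the high-degree terms would wrap around onto the low ones.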
I was hopeless at maths at uni and even I could follow this. Fantastic explanation!
I couldn't fully follow this :(
I had Linear algebra at uni tho...
I still cant :(
I had to stop after 20 minutes because my mind was overloaded...
ok, I remember being blown away the first time I learned about the FFT, and I had only seen it in the context of signal processing. This is truly amazing. I would watch the big O notation part, but I just spent a month doing that already this semester.
your video was amazing and i am really glad i came across it. really inspiring keep up the good work!!!
This is insanely good work. I wish I had a better sense of how Euler’s identity, complex numbers, polynomials, and the cycle of differentiation and integration all relate, because everything seems to boil down to these ideas at the heart of mathematics.
Add linear algebra to the list!
(And Generating Functions - I think there is a 3blue1brown video about them.)
This is single-handedly the best explanation on the topic I have ever seen. Perfect for college students!!
This is a fantastic video! Feels very much like the style of 3b1b- not just bc of the animation, but also bc you have a similar way of carefully yet thoroughly explaining complex ideas. Well done!
It's one of those videos which blows your mind... Suddenly, and for good
Great video!!
I learned FFT in the classical context, but this is amazing! (Thanks 3b1b for suggesting 😄)
Just one detail: I feel we cannot apply the 1/n directly to omega (even if the inverse is otherwise right). The 1/n must also apply to the 1 entries of the matrix, and since omega gets raised to powers, the 1/n factor would get raised to those powers too. =)
Yeah another astute commenter like yourself noticed this, I updated my pinned comment with this note to address this. Sorry about that!
Surely, one of the best explanations of FFT I have ever listened to.
This... this is a perfect example of FM technology.
this is way over my head but I appreciate things being talked about in terms of computability not just in the abstract
I really wanted to understand FFT, so I took the time to thoroughly watch 3Blue1Brown's, Veritasium's, and your video on the topic, and your approach helped me the most! Maybe because I watched it last, but still, you did a phenomenal job. Well done!
same man.
Beautifully described and explained. This is the Best video on FFT on youtube which describes the underlying concepts of FFT.
This guy was my TA at Cal ❤️❤️
Go bears! :)
John Wick is at caltech now?
Thank you so much for the beautiful arguments you used to show the magnificence of the FFT algorithm!
I am glad 3b1b gave you a shoutout so that I got to see this!
very nice video, I especially liked seeing the actual code implementation and of course the smooth animations :)
Did 3b1b? Seems like I missed that. Where did he give that shoutout?
@@JoJoModding community post
@@sainathsingineedi2922 Patreon?
@@JoJoModding ua-cam.com/video/g8RkArhtCc4/v-deo.html
FFT made me fall in love with engineering but in my humble opinion your vid didn't quite get out there enough with it. Visualizing how it applies to real world applications(signals specifically which covers pretty much everything in the world) would've made it much more interesting because it's truly amazing how much FFT can solve complex problems that would be extremely difficult and time consuming to solve without applying it. Fourier transform is a relic of art not many get to truly experience. My perception of the world got slightly better after learning it and it all became so fascinating... God bless you Fourier
I don't even have a layman's understanding of complex mathematics, but I was very pleased to be along for the ride, just out of curiosity. Beautiful presentation and I hope to be able to revise and understand more one day.
For the context, the reason that we can compute polynomial multiplication efficiently with FFT is based on 2 facts.
1. multiplication of polynomial in coefficient representation is a *convolution*. From wikipedia (en.wikipedia.org/wiki/Convolution), "The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences."
2. The product of 2 sequences in frequency space (the representation you get after applying the Fourier transform) is the frequency-space representation of the convolution of the original sequences (in time space), or (f * g)^ = f^ g^. Here f, g are 2 functions, * is the convolution operator, and ^ means the frequency-space representation (i.e., applying the Fourier transform).
Absolutely beautiful, I always wanted to understand as to why FFT works!
The FFT has always been presented to me as a thing that can map vector convolution to vector multiplication element-wise, and now it looks astonishingly clear why. The trick is, the convolution (or as I sometimes call it, long multiplication without carrying) is *exactly* what you should do in order to multiply the polynomials in coefficient representation, therefore, you can apply it anywhere where convolution takes place.
Thank you for this video!
wow man, "A genius is one who makes complex ideas to simple not simple ideas to complex" this surely applies to you. keep doing this good work.
Polynomial multiplication is just a discrete convolution, and Fourier Transform is a known way to perform convolution.
Yea for signals convolution in the original time domain is the same as multiplication in the frequency domain. This is exactly analogous to convolution of the coefficient representation (using the distributive law) being the same as pointwise multiplication of the value representation. This is the convolution theorem.
Excellent. I was looking for info on the NTT and iNTT over finite fields, and this is a perfect explanation that doesn't incorporate any of the "motivating" nonsense about waves or rotations that aren't relevant for my particular situation!
This is such an amazingly lucid visual explanation of Fourier! I think Laplace himself would almost be proud, if not infuriated. So much to visualize, but so little time! I mean frequency…
infourierted
Wow just wow, this has gotta be the best video of the year for me. Absolutely astounding work!
My mind wasn't blown even once but I didn't arm it first. I'll rewatch this when I have read up on roots of unity and DFT matrices. I did enjoy the content even though it was currently way over my head ^^
So great that 3b1b just gave this video a shoutout, it's super underrated!
Hell yeah
You can never tell how the UA-cam search algorithm leads you to the right place.
It uses FFT
This is one of the best explanations I have ever seen. Thank you
I’m going to watch this until i understand it
I was about to comment about the voice's EQ and compression, then I saw the video date and checked your newest video. Congrats on the progress, much more understandable!
Thanks so much for this comment! Audio was definitely one of those parts of the channel that was admittedly quite weak in the early stages. Got a new microphone, learned a lot more about doing voice overs properly, and it's nice to see that someone notices :) Usually, the gauge for whether it's good is that no one complains :P
@@Reducible The classic work of behind-the-scenes engineers: when everything is built meticulously no one notices, get one thing wrong and it's top headline.
I'm glad you liked my small token of appreciation! :)
I was blown away by the ability to make me lose track of the topic faster than any teacher, engineer, and mathematician combined.
sarcasm?
This video totally blows my mind and makes me regret not going to science school for the first time. Thank you so much for the great video.
"We'll take a look at something you're all familiar with. Polynomial multiplication."
1:53 minutes in and you already lost me lmao
bruh
We've gone too deep 😂
finally got a deep understanding of why FFT's magic works! thanks for helping me pass my course
Man, this is one of the best videos I’ve ever seen! You are very clear and keep it very interesting. Serious props to you, great job!
When I was an undergrad in Computer Engineering, I had coincidentally taken an Algorithms class and a Signal Processing class in the same semester. During one of our DSP lectures, our professor made a throwaway comment saying, "Oh if you know what merge sort is, you already know how to do FFTs." That one sentence blew my mind so hard that no joke I had an existential crisis for a week. Thinking about it continues to give me goosebumps to this day.
It's awesome to see 3b1b inspiring great new channels like Reducible. Fantastic material thus far and I'm truly excited for what's to come. Well done!
spent 6 hours figuring this out, and it blows my mind! What a great algorithm! This Fourier transform has a lot of applications, mostly prominent in signal processing. An ingenious algorithm to the core.
Really neat video! Speaking from experience, this must have taken forever to make haha. Brilliantly explained and produced :)
I just watched some of your videos! Man, you are going to explode -- very well done. I can definitely learn a few things from you :) Keep at it!
This is possibly the best explanation of the FFT and IFFT on UA-cam.
13:30 That's it, I'm lost. Good video man! Will have to review from the top.
batner Same Here
So brillant description of FFT algorithm. Great job man.
Amazing! I somehow missed this part of my signal processing course, so I was using the FFT as a black wonderbox almost every day. Now I have a clear view of how it's actually working. Thanks a lot, man! Your work is highly appreciated. Special thanks for using Python for the code representation.
Incredible the way you explain such a complex topic. I could follow all of the details.
Great job!
The video is really great and I understood it very well, but what I didn't understand is how this all relates to time domain vs frequency domain. The FFT transforms a signal from the time domain to the frequency domain, but here it transforms a polynomial from coefficient representation to value representation? Aren't these two entirely different things? How is the coefficient representation the same as the time domain, and the value representation the same as the frequency domain?
I have the same question in my mind
The coefficient representation plays the role of the time-domain samples, and the value representation (the evaluations at the n-th roots of unity) plays the role of the frequency-domain spectrum. Evaluating a polynomial at the n-th roots of unity is exactly a multiplication by the DFT matrix, the same transform used to take a time-domain signal to the frequency domain.
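To make that connection concrete, here is a minimal stdlib-only sketch (the `dft` helper is just the textbook definition, not code from the video): evaluating the coefficient list at the n-th roots of unity is the same sum that defines the DFT.

```python
import cmath

def dft(coeffs):
    """Evaluate the polynomial with the given coefficients at the n-th roots of unity."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * j * k / n) for j, c in enumerate(coeffs))
            for k in range(n)]

# P(x) = 1 + 2x + 3x^2 + 4x^3, evaluated at the 4th roots of unity: 1, i, -1, -i
values = dft([1, 2, 3, 4])

# Direct evaluation at x = -1 gives 1 - 2 + 3 - 4 = -2; values[2] should match
print(round(values[2].real), round(values[2].imag))  # prints: -2 0
```

So the "coefficient to value" conversion and "time domain to frequency domain" are the same matrix-vector product, just read with different interpretations of the input vector.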
Love how simply he went through it without skipping anything that would make it difficult to follow
holy crap wow
this video really made things click for me
i've never gotten goosebumps from understanding something before
but i just did
Sir, you've explained this topic really clearly! And luckily I saw your video 3 days before my exam. Thank you sooo much!!
I would be interested to learn why the FFT circuit resembles a benes network.
That was a connection I did not realize; there doesn't seem to be much literature about it, but it's definitely interesting. Thanks for teaching me something!
Ben, at a high conceptual level, all the coefficients of the two polynomials being multiplied need some connectivity to each other. This is just like needing a circuit path between all pairs that need to communicate in, say, a telephone network.
@@PhillipNordwall The fast Walsh Hadamard transform is better for connectivity.
Incredible! I've known of FFT for decades but didn't realise the underlying elegance.
In order to understand this explanation of FFT you first have to understand FFT.
Your way of explaining is really awesome... please keep making such videos!!!
It's soul crushing for mathematicians, but to truly appreciate the glorious beauty of the FFT, you really need to be an electrical engineer. 😁
I've found a lot of applications for the FFT over the years. Stochastics, statistics, machine learning, music, audio engineering, and so much more. As much as its elegance and ubiquity is stated in the video, it still somehow doesn't seem to do the algorithm justice. It is baked into everything.
@@keris3920 - Great examples. A software defined radio (SDR) can replace a heap of components used to process the radio frequency signal with the FFT, and then replace a lot more components used to process the resulting audio. I get the impression that a more computationally capable human brain could listen to the radio by simply thinking of the FFT, without needing any electronic components.
@@Liberty4Ever in audio engineering, it's almost paramount to do a lot of frequency-domain work in software using FFTs and wavelets. If you record something that's modified by the hardware, there is no going back and fixing it later. Music in general has a place for both HW and SW implementations of the FFT and frequency-domain modifications in general, so it is a great example (even if the average Joe doesn't have to consider the FFT when listening to music, someone has used an FFT to assist in almost every recorded work of art to date).
Loved this video - had to pause a few times but the explanation really clicked
I never went into understanding FFT, because it always seemed so scary to me - even after I understood the Fourier transformation itself. But holy math, this explanation is brilliant! The animations are so smooth (3B1B did a really amazing job with his module!), and everything is clean and understandable. The length of the final code is completely mind blowing!
I got so hyped I wrote my own Python implementation, which contains the entire polynomial multiplication, using the FFT implementation you provided (with a few minor modifications). So here it is!
Notes:
- in FFTCore(), sign decides whether we use FFT (sign=1) or IFFT (sign=-1)
- using list comprehensions instead of for loops
- padding the polynomials so we always operate on lengths that are powers of 2
- pretty printing the result (the limitations of float values bring near-zero but non-zero imaginary components after IFFT)
import math, cmath

def FFTCore(P, sign):
    n = len(P)
    if n == 1:
        return P
    ye, yo = FFTCore(P[::2], sign), FFTCore(P[1::2], sign)
    w = [cmath.exp(sign*2*cmath.pi*1j*i/n) for i in range(n//2)]
    y1 = [ye[j] + w[j]*yo[j] for j in range(n//2)]
    y2 = [ye[j] - w[j]*yo[j] for j in range(n//2)]
    return y1 + y2  # concatenation

def FFT(P, inverse=False):
    n = len(P)
    assert (n & (n-1) == 0) and n != 0, "Length of P must be a power of 2"
    if inverse:
        return list(map(lambda x: x/n, FFTCore(P, -1)))
    return FFTCore(P, 1)

def polyMul(A, B):
    lenA, lenB = len(A), len(B)
    lenC = pow(2, math.ceil(math.log2(lenA+lenB)))
    A, B = A + [0]*(lenC-lenA), B + [0]*(lenC-lenB)
    Af, Bf = FFT(A), FFT(B)
    Cf = [Af[i] * Bf[i] for i in range(lenC)]
    return FFT(Cf, inverse=True)

def prettyPrint(C, lenC):
    res = [f"{x.real:0.2f}" if cmath.isclose(x.imag, 0.0, abs_tol=0.0001) else f"{x.real:0.2f}{['+',''][x.imag < 0]}{x.imag:0.2f}j" for x in C]
    print(', '.join(res[:lenC-1]))

A = [2, 3, 1]
B = [1, 0, 2]
C = polyMul(A, B)
prettyPrint(C, len(A)+len(B))
I wish this video existed when i chewed through that topic at university.
Great explanation and demonstration. Thx
Minute 6:30: how do you multiply points? I'm trying to figure out the rule and I'm not getting it.
By the way: subscribed before finishing the video. Seems AMAZING!
By multiplying points, all that's really happening is multiplying the y-values and keeping the x-value the same. Sorry if that was confusing!
Thanks for asking this. And thanks for the answer too. I was also confused there.
Excellent video.
🤦🏻♂️🤦🏻♂️🤦🏻♂️ MANY THANKS! Now that you said it, it is obvious. I think it was the sentence "multiplying points" that got me confused. Thanks again!
Ah! Glad this was asked and answered, that was a showstopper for me, so I went hunting in the comments :-)
Thanks for asking. I came here to ask the same thing.
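Since this tripped up several people: a tiny sketch of what "multiplying points" means (the polynomials A and B here are just illustrative examples, not from the video).

```python
# A(x) = x + 1, B(x) = x - 1, so C(x) = A(x)B(x) = x^2 - 1
A = lambda x: x + 1
B = lambda x: x - 1

xs = [0, 1, 2]                        # shared sample points: x-coordinates kept as-is
ya = [A(x) for x in xs]               # [1, 2, 3]
yb = [B(x) for x in xs]               # [-1, 0, 1]
yc = [a * b for a, b in zip(ya, yb)]  # [-1, 0, 3] -- only the y-coordinates multiply

# Each (x, yc) pair lies on C(x) = x^2 - 1, e.g. C(2) = 3
print(list(zip(xs, yc)))  # prints: [(0, -1), (1, 0), (2, 3)]
```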
This is the english explanation that I have been looking for. God bless you
Why Fourier, though? I don't see a connection to fourier transform yet.
Yet.
The Fourier Transform is an integration from -∞ to +∞ of a continuous function. Note the integrand (kernel) of the Fourier transform. The Discrete Time Fourier Transform (DTFT) is still continuous with the summation from -∞ to +∞ (continue to note the kernel). The Discrete Fourier Transform (DFT) samples the DTFT into N samples. The Fast Fourier Transform (FFT) is an optimization of the DFT that uses N (power of 2) samples as shown in the video above. This article has links to all of the above: en.m.wikipedia.org/wiki/Discrete-time_Fourier_transform. Hope that helps.
In my opinion, the Fourier transform actually lets us go from the coefficient representation to the point-wise representation. So in that context, doing calculations in that point representation is possible through the Discrete Fourier Transform (DFT); and an efficient way to perform the DFT by exploiting symmetry and periodicity is the Fast Fourier Transform (FFT).
Feel free to correct me if I am wrong
@@AhsanAli-fr4ok The pointwise representation would be what is meant by Discrete in this case, so I believe you are correct here.
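As a sanity check on the time-to-frequency reading of the DFT discussed above, a small stdlib-only sketch (the `dft` helper is the textbook O(n^2) definition, not the FFT): a pure tone sampled in the time domain shows up as two peaks in the frequency domain.

```python
import cmath, math

def dft(x):
    # Discrete Fourier Transform: X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure tone: 3 cycles across N = 8 samples
N = 8
signal = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
spectrum = [abs(X) for X in dft(signal)]

# Energy concentrates in bins 3 and N-3 = 5 (the positive and negative frequency)
peak_bins = [k for k, mag in enumerate(spectrum) if mag > 1.0]
print(peak_bins)  # prints: [3, 5]
```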
Most videos introduce the FFT by means of Fourier transforms and complex numbers, and I would have too. I liked this way of explaining it, since it doesn't require knowing Fourier transforms or complex analysis to any significant degree. A very neat way to expose people to the FFT through the problem of fast polynomial multiplication. I just so happened to come across your video and realized it was very close to what I was trying to do with the Schönhage-Strassen algorithm and Fürer's algorithm to get to the Harvey-van der Hoeven O(n log n) algorithm. Still got some work to do, but it was a delight to see somebody make such a good video on a similar problem. Keep it up, great channel!
Excellent video. However, you do not explain why multiplying two polynomials has something to do with the Fourier series (a sum of sine and cosine waves)
No dude, he explained it: it's about the n-th roots of unity, which are found using the exp(i*x) formula, itself defined as exp(i*x) = cos(x) + i*sin(x), exactly at 17:00
multiplying two polynomials is a convolution of their coefficients
a convolution in the time domain is a multiplication in the frequency domain
I think Mr Reducible was a little confused. Suppose f = 3x^2 + 2x + 5 and g = 4x + 2. Then f(x)g(x) = 12x^3 + 14x^2 + 24x + 10. In Matlab or Octave, type the following commands to compare convolution in the time domain to using the fft and ifft:
f = [5 2 3 0 0]'; g = [2 4 0 0 0]';
h1 = conv(f, g)
The output is h1 = [10 24 14 12 0 0 0 0 0]'.
F = fft(f); G = fft(g);
H = F.*G
This is element-by-element multiplication, not convolution.
h2 = ifft(H)
The output is h2 = [10 24 14 12 0].
In other words, you don't graph f, graph g, and take samples. You perform the convolution directly on the coefficients. The video is still great, and I never knew before watching it that polynomial multiplication is a convolution.
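The same check works in Python with the ascending-coefficient convention; the `fft` and `convolve` helpers below are just illustrative textbook versions, not the video's code.

```python
import cmath

def fft(P, invert=False):
    # Recursive radix-2 FFT; len(P) must be a power of two
    n = len(P)
    if n == 1:
        return P[:]
    even, odd = fft(P[::2], invert), fft(P[1::2], invert)
    sign = -1 if invert else 1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve(f, g):
    # Direct O(n^2) convolution of coefficient lists
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# f = 5 + 2x + 3x^2, g = 2 + 4x (ascending coefficients, as in the Matlab example)
direct = convolve([5, 2, 3], [2, 4])         # convolution in the "time" domain
f = [5, 2, 3, 0, 0, 0, 0, 0]                 # zero-padded to a power-of-two length
g = [2, 4, 0, 0, 0, 0, 0, 0]
H = [a * b for a, b in zip(fft(f), fft(g))]  # pointwise product in the frequency domain
h = [round((x / len(H)).real) for x in fft(H, invert=True)]
print(direct, h[:len(direct)])  # prints: [10, 24, 14, 12] [10, 24, 14, 12]
```

Both routes produce the same coefficients, which is exactly the convolution theorem stated above: convolution of coefficients in one domain equals pointwise multiplication in the other.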
An absolute beauty...
Visualization at its best..
Hats off from the bottom of my 💓
I watched this video about 8 months ago and it was my first exposure to signal processing. As a result I’ve spent the past 8 months obsessing over the still unsolved problem of multiple fundamental frequency tracking/automatic music transcription.
It has completely destroyed my social life and I still can’t put it down. This project started with math from the early 1800s and now everything I’m doing is based off articles published within the last few years, I’ve learned Julia and written many thousands of lines of code, and I’ve attempted almost every machine learning/signal processing technique known to man.
I’ve put a PhD thesis level of work into this and the idea of stopping before I solve it is unfathomable to me, but at the same time so is the idea of actually solving it or at least beating the current (unsatisfactory) state of the art.
Anyways, thanks for making this incredible video. It has completely derailed my life but holy hell has it been a fun ride so far.
bruh...you gotta chill
how's it going?