Why doesn't this channel get the recommendations it deserves? It's by far the best CS edutainment channel I have discovered yet.
If only it had a more original aesthetic.. instead of copying so much from 3blue1brown
@@JamieT1986 I really don't mind that. Credit where credit is due, but it's a great way of presenting information. I wish it was used even more!
@ゴゴ Joji Joestar ゴゴ This guy also isn't as good at animating. Just look at 4:27. The screen is absolutely cramped.
People should be more grateful that such amazing videos exist for free, regardless of who makes them and what library they use. Most of us are happy to pay a small fortune for college lectures of far inferior quality. The fact that these animated videos are here for free is a gift. The least one can do is be nice and say thank you to the creator... "More original aesthetic": tell that to the colleges that use the same old PowerPoint presentations for all their lectures...
@@JamieT1986 He is using 3b1b's library to make animations, you idiot sandwich. Stop your pointless complaining and focus on the maths content!
I remember studying the FFT in college (some 50 years ago) and I used it throughout my career. I’m now retired, but I still love to watch videos on Maths. Yours are particularly clear; in fact, they are works of art. Excellent!
What field are you in? Where's FFT used a lot?
@@whitewalker608 Math, computer science, physics.
@whitewalker608 GPS, data compression (MP3, MP4, JPEG), it's the backbone of (any) control engineering task, you can multiply polynomials and large numbers easily, which is used all over the place in crypto, and seismic detection.
Seismic detection was the initial driver, as people wanted to detect whether anybody in the world was testing nuclear bombs or similar.
The same technique is used for earthquakes and tsunamis, and you can also use it in predictive maintenance of machines that must not fail.
This series alone earned you my subscription. It ain't much, but it's something. Thank you for the explanations.
What a lovely set of videos! It helped me refresh my memories of Signal Processing classes. Would have loved to see some butterfly diagrams though; they are very beautiful to look at and quite useful for visually understanding the process.
Yeah, I actually had an explanation for it in the original script, but I had to cut it out because the video was getting way too long with it included. I think a follow-up to the FFT video is something I will put on my list of TODOs, and in that video I may cover the butterfly diagram and also another perspective on the DFT/FFT from a signal processing angle.
@@Reducible Well consider me subbed then, I will be waiting for the video. :)
@@Reducible still waiting for it
@@Reducible waiting for followup video
I love your channel! I run my high school computer club and your videos are an invaluable resource for both learning and teaching!
Wow, that's super cool! Thanks for sharing!
Who tf is teaching FFT applications in high school
@@prophane3330 exactly my opinion for the above comment lol
@@prophane3330 😂😂😂😂😂
@@prophane3330 It’s pretty normal)
Man... learning computer science with 3b1b style. Insta sub
You always make me cry, be it the Tower of Hanoi, recursion, the number of colours, or now the fast Fourier transform. Thanks a lot for being so supportive.
Finally I have found an explanation that makes sense. Many thanks!
thanks for this example, I wouldn't have understood the FFT without it
Thank you so much for this very intuitive breakdown of the FFT! Commenting so that this series will get the views it deserves.
I have been mulling over the FFT for WEEKS. Finally I have understood it, just by seeing that for 4 elements, the number of multiplications we did was 2*2+4 = 8 = 4 log2(4), compared to 4^2 = 16 for the DFT. For 8, it would be 4*2+2*4+8 = 24 = 8 log2(8), compared to 8^2 = 64. I have an exam on this this Monday, so thanks to you if I ace the FFT question!
Thanks!
Why does this channel only have 20k subscribers?? Great content.
Amazing explanation and illustration!!!
There seems to be a small typo at around 2:09: P_e (x^2) -> P_e(x) and similarly for P_o.
Keep up the awesome work!
I think it should be P_e(x^2) = 5 + 2x^2 -> P_e(x) = 5 + 2x -> [5, 2] and similarly for P_o
Please continue making these videos they are so helpful
Your channel is awesome, I'm looking forward to seeing the next video!
I remember how I fought my way through this algorithm :D Very good video!
I'd like to add a very interesting point. When working with discrete signals, it is common to use the Z-transform. It maps a vector to a polynomial in the variable z. The interesting point is that polynomial multiplication is the convolution of the coefficients. The discrete convolution can be performed with a "smart" inner product: "smart" because of the size of the result, where the product starts and finishes for each element, and the back-to-front flip.
I'd be glad with a Z transform video. :)
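To make the point above concrete, here is a minimal Python sketch (assuming numpy is available) showing that multiplying two polynomials is the same as convolving their coefficient lists:

import numpy as np

a = [1, 2, 1]   # 1 + 2x + x^2
b = [5, 0, 1]   # 5 + x^2
# convolving the coefficient lists gives the coefficients of the product polynomial
print(np.convolve(a, b))   # [ 5 10  6  2  1]  ->  5 + 10x + 6x^2 + 2x^3 + x^4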
I can't believe that this video has so few views. Incredible visualization.
Mmmmm, a masterpiece of art, thanks dude...
Wow, your channel is a university 😍
Could you do an example inverse FFT? Using the algorithm from the other video I can't get it to work. I think the 1/n which was a factor of the alternative omega is supposed to just be multiplied by the entire final product, but I'm sure I'm just missing something here because every source I check leaves the formula the same way.
Really love the content; reminiscent of 3blue1brown in how you make advanced topics so easy to understand.
Edit: Checked another source and I'm right, checked on the other video and it's fixed in the description. Should have checked there first I guess!
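For reference, one way to write the corrected IFFT along those lines (a Python sketch based on the recursive FFT described in the video, not the exact code from the video or its description): use omega = e^(-2*pi*i/n) inside the recursion and divide by n once at the very end, instead of folding 1/n into omega at every level.

import cmath

def ifft_no_scale(values):
    # plain recursion with omega = e^(-2*pi*i/n); assumes len(values) is a power of 2
    n = len(values)
    if n == 1:
        return values
    omega = cmath.exp(-2j * cmath.pi / n)
    y_e, y_o = ifft_no_scale(values[0::2]), ifft_no_scale(values[1::2])
    y = [0] * n
    for j in range(n // 2):
        t = omega**j * y_o[j]
        y[j] = y_e[j] + t
        y[j + n // 2] = y_e[j] - t
    return y

def ifft(values):
    n = len(values)
    return [v / n for v in ifft_no_scale(values)]   # the 1/n factor applied once, at the end

print(ifft([1, -1]))   # -> [0, 1] up to floating point: the coefficients of y = x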
Second that! I decomposed FFT successfully, but IFFT remains unsolved.
Are you going to go over the butterfly diagram in the thumbnail from the last video?
I was going to in the original video, but I cut it out because the video was already pretty long -- I may add it into another video on the FFT/DFT if people are interested.
@@Reducible totally..
@@Reducible Yes pls
@@Reducible YES
simply outstanding... respect
Wow, how do you only have 6K subscribers? I really appreciate your mathematics content (probably as I'm a maths student).
We need more videos about wavelets!
Really nice! Although when I read the title I thought this would be about turning the recursive algorithm into an iterative version. Which is something I've just spent the afternoon doing, it was interesting. (Maybe I should just look up the "advanced" algorithm for any n)
Interesting to see the FFT from an algorithms/comp-sci perspective.
Your videos are absolutely amazing. I did some of this in college so it's great to see this information presented with such clarity using clear language and modern techniques.
Specifically, though... I can't understand the link between the FFT and what I understand as a Fourier transform. For a normal Fourier transform, you're translating a time-domain signal into a frequency-domain signal, or vice versa. But with the FFT you're translating between a value domain (which could be the time domain) and a polynomial. What's the link between polynomials and sinusoidal signals? How can this function compute a discrete Fourier transform? edit: It's on the tip of my brain. I can see how, using very high-order polynomials, you're going round the roots-of-unity circle in a similar way to a Fourier transform, but I still can't get the link.
Think of the convolution theorem, convolution in time domain is multiplication in freq domain. Convolution of the coeff representation (corresponding to polynomial multiplication) is the same as pointwise multiplication in the value representation.
@@smolboi9659 Thanks for this, it makes sense
you're awesome, thank you
So satisfying!
But it would be nice to see how the FFT works when n is not a power of 2!
This is awesome!!
my brain is focked and I also subscribed, great vid yea
dammmn it this channel is so so so so good !!!!!
Amazing ! !😍😍
It would be handy to see the original polynomial and its FFT on a graph together, along with some explanation of how this is useful.
Your videos are very helpful, and I started feeling like I am ready to solve one programming problem related to FFT over rings modulo p.
However, when I started implementing stuff, I realized I was missing some crucial part.
If FFT(polynomial) returns n values for a degree-n polynomial, how can we compute poly1*poly2 if we only have n points? We need 2*n points.
I checked other implementations, and they are computing poly1 and poly2 for 2n points. After multiplication we have 2n points and can calculate 2n coefficients.
I think this video is just describing the FFT, not necessarily poly1 * poly2. For that, we have to compute a value representation of length 2n + 1. Furthermore, that needs to be extended to the nearest power of 2.
@@johnfazzie3384 You can add dummy 0 coefficients and it will work just fine. In fact, if the number of actual coefficients isn't a power of 2 you _have_ to do so. For example, for a 4th-degree polynomial you have 5 coefficients, so you have to add at least 3 more to get 8 in order for this FFT implementation to work correctly.
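A tiny sketch of that padding step in plain Python (no libraries assumed), using a hypothetical 4th-degree example:

coeffs = [1, 0, 0, 0, 1]                       # e.g. x^4 + 1: a 4th-degree polynomial has 5 coefficients
n = 1
while n < len(coeffs):
    n *= 2                                     # smallest power of 2 that fits, here 8
padded = coeffs + [0] * (n - len(coeffs))      # [1, 0, 0, 0, 1, 0, 0, 0]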
In the previous video on the FFT you mentioned early on the relation to the time domain and frequency domain. The time signals we feed our FFT algorithm with are, however, not polynomials. Could you explain why this works not only for polynomial value pairs, but for time/frequency signals too? In the time and frequency domains the Greek letter omega is often used as the circular frequency, which is a real number in practical cases. Is there any link between these two omegas? What is the importance of frequency to this algorithm?
Amazing explanation and visualization! Thank you for another awesome video!
A small question. It seems that roots of unity were used to perform fewer calculations and reuse the repeating values for symmetrical pairs. However, in the code the time-consuming expression omega^j*y_o[j] still appears twice, so we don't seem to benefit from using roots of unity as inputs. Was it done to keep the code clear, or did I get it completely wrong?
Good question. The way the code is written is actually fine. The key idea here is that instead of performing n multiplications n times, as we do in standard polynomial evaluation, we perform the operation recursively instead. You are right that each recursive call takes n operations to compute. But the key point is that at every level of the recursion, the even and odd sub-polynomials have half the degree of the polynomial they were split from. This fact makes the entire algorithm run in O(n log n).
If you are familiar with recurrence relations, the formal recurrence here is T(n) = 2T(n / 2) + O(n), which, when you solve it out, gives you O(n log n). Another way to see what's going on that may be more intuitive is drawing a tree-like diagram of the recursion and the number of operations in each call. Generalizing that diagram, you will see it's still O(n log n).
I know this is from 8 months ago, but I thought it was worth adding to Reducible's reply - yes, assuming no compiler optimisations, the code here is slightly inefficient as omega^j*y_o[j] is repeated for both lines inside the for loop, when you could store the value of that in a temporary variable and use that twice instead. However, being able to make that optimisation is not the main benefit of using roots of unity, it is just a nice bonus - the time complexity of the algorithm stays the same regardless, you are just speeding up the calculation by a linear factor (as Reducible explained - the main benefit of using roots of unity here is how they allow you to split the problem recursively and therefore obtain O(n log(n)) time complexity).
Indeed, in this case, the actual speed-up here would be very small - omega is only ever raised to an integer power which can be done very efficiently and then you have a single multiplication which can be done even faster.
Also note that in many programming languages, the compiler will detect a repeated calculation where the input value is unchanged and will optimise it automatically, so I wouldn't always worry about tiny speed-ups like this unless you actually notice a performance cost after profiling your code.
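For concreteness, here is what that looks like in Python, with the repeated product hoisted into a temporary variable (a sketch of the recursive FFT described in the video, assuming the input length is a power of 2 and omega = e^(2*pi*i/n); this is a rewrite, not the video's exact code):

import cmath

def fft(p):
    # p: list of coefficients; len(p) assumed to be a power of 2
    n = len(p)
    if n == 1:
        return p
    omega = cmath.exp(2j * cmath.pi / n)
    y_e, y_o = fft(p[0::2]), fft(p[1::2])      # two half-size subproblems -> T(n) = 2T(n/2) + O(n)
    y = [0] * n
    for j in range(n // 2):
        t = omega**j * y_o[j]                  # computed once, reused twice
        y[j] = y_e[j] + t
        y[j + n // 2] = y_e[j] - t
    return y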
5:00 I found out that the values you get from that loop follow a pattern, at least for those two examples:
Note: I am a lot younger than you would expect, so I don't get most of the things in the video; I am just watching to pick up a little prior knowledge for when I actually need to learn it properly.
Let's call the inputs FFT[x, y]
and the values you get after the loop [a, b].
The value for a is a = x + y (initially I noticed it as y = a - x).
The value for b is b = x - y (initially I noticed it as x = y + b).
That's it. I don't know if it is actually correct and I can't check either, because I don't know how; so if you can, could you check whether this pattern is right?
This is correct. When our input consists of two numbers X and Y, we are essentially evaluating the function f(t) = X + Yt at the points 1 and -1. So we get f(1) = X + Y and f(-1) = X - Y.
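A quick numerical check of that two-point case (numpy's FFT uses the opposite sign convention, but for n = 2 the non-trivial root of unity is -1 either way, so the values agree):

import numpy as np

x, y = 3, 7
print(np.fft.fft([x, y]))   # [10.+0.j  -4.+0.j]  ==  [x + y, x - y]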
This is awesome! Would be great if you added the butterfly diagram.
I'm actually more confused after watching this video than I was after the previous one. The only question I had was how this ensures there are no complex coefficients after the IFFT is done. But now there are so many more))) So, based on the assumption that you are multiplying two polynomials with highest power x^3, your final polynomial would have highest power x^6, and thus n has to be 8? A full video with two FFTs, one multiplication, and one IFFT would be much appreciated.
The reason that there are no complex coefficients after the IFFT is that the complex conjugates sum up and give zero imaginary part.
Let's say the polynomials have degree n1 and n2. The resulting polynomial has degree n3 = n1+n2. You find n = the smallest power of 2 strictly above n3.
When representing as coefficients, you make both polynomials start out with their coefficients then pad out with zeros until you have exactly n coefficients.
Let's take the polynomials x^2+2x+1 and x^2+5. The degree of the polynomials are 2 and 2, thus the resulting degree is 4. You want n=8. (you cannot use 4 as you need strictly greater, so 8 is the smallest; you can do 16 or 32 but why would you?)
Then the polynomials are represented in coefficient form as [1, 2, 1, 0, 0, 0, 0, 0] and [5, 0, 1, 0, 0, 0, 0, 0]. You do FFT on each, you multiply pointwise, you do IFFT on the result and are done.
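A minimal numpy sketch of exactly that recipe (note that np.fft uses the e^(-2*pi*i/n) sign convention, but since its forward and inverse transforms are consistent, the product comes out the same; the leftover imaginary parts are floating-point noise, since the conjugate terms cancel as mentioned above):

import numpy as np

a = [1, 2, 1] + [0] * 5                    # x^2 + 2x + 1, padded to n = 8 coefficients
b = [5, 0, 1] + [0] * 5                    # x^2 + 5,      padded to n = 8 coefficients
values = np.fft.fft(a) * np.fft.fft(b)     # pointwise multiplication in the value representation
c = np.fft.ifft(values)
print(np.round(c.real, 10))                # [ 5. 10.  6.  2.  1.  0.  0.  0.]  ->  5 + 10x + 6x^2 + 2x^3 + x^4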
@@paulstelian97 p
Thank You
Incredible visualization. In which program was this presentation made?
manim library for python
Very good
It'd have been complete if you had used the inverse FFT to change it back to the coefficient representation too, but still a great video!
Great video! Are you going to go over some algorithms in simulation such as enhanced sampling?
Great video, it was very helpful!
Also, can you please tell us what changes could be made to the code to convert it into an implementation of the split-radix FFT?
So, when we plug in the coefficients for a polynomial into the FFT, we get a set of values that this polynomial takes on. Got it.
So, how does this have anything to do with plugging in values of a periodic function and getting the coefficients of complex exponential functions?
(I'm in third semester electrical engineering, I've had my Algorithms and Data Structures course last semester and didn't understand the FFT. Now I'm learning about all the Fourier stuff in Calculus 3 and Signal theory. I would love to get a deeper understanding, because I find Fourier analysis fascinating!)
Thanks for this comment and there is a really interesting connection. I wanted to include it in the video, but it was already pretty long so I had to cut it out. Might make a separate video on it.
So the signal processing FFT is often taught in terms of taking a time-domain signal and finding the coefficients of fundamental frequencies (which can be represented by complex sinusoids). In the signal processing FFT, you will actually see the DFT matrix defined slightly differently: en.wikipedia.org/wiki/DFT_matrix where w = e^(-2*pi*i/n). This is just a sign convention, so either definition is fine as long as the DFT and inverse DFT are appropriately defined. But one interpretation of this that relates to this particular story with polynomials: similar to how we use the IFFT with e^(-2*pi*i/n) to calculate the original coefficients of our polynomial from the points sampled at roots of unity, the FFT in signal processing contexts attempts to find the coefficients of basis vectors that represent complex exponential functions of fundamental frequencies.
Obviously there's a lot more details here that I can't hope to do justice in a single comment, but I hope that gives you some insight. Again, this might be another video since a lot of folks seem to be interested.
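A small sketch of that matrix view (assuming numpy, whose np.fft.fft happens to use the same w = e^(-2*pi*i/n) convention as the Wikipedia DFT matrix):

import numpy as np

n = 8
k = np.arange(n)
W = np.exp(-2j * np.pi * np.outer(k, k) / n)   # DFT matrix: entry (j, k) is w^(j*k)
x = np.random.rand(n)
print(np.allclose(W @ x, np.fft.fft(x)))       # True: the FFT computes this matrix-vector product in O(n log n)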
@@Reducible Yes please that would be so great!
Why do we use the Fourier transform in communications and the Laplace transform in control systems?
Thanks
I wish that someday I too will be able to understand what you are talking about.
In this example the order of the result matters.
After that I tried to calculate it myself with n=4, and the algorithm doesn't seem to work very well.
But how does this connect to sample values!!!!?? How does this example correlate to actually converting from time to frequency??? Is the input just your sample values over a slice of time?
His approach to viewing the FFT is a lot different from how it is normally taught in DSP classes, so you might not immediately see the connection with frequencies. You can still see the relation to frequencies by looking at the spacing between the values on the unit circle. Imagine going around the unit circle by stepping through the sample points: the more closely the values are spaced together, the more steps it takes you to complete a full turn (therefore low frequency). High frequencies correspond to values that are more widely spaced apart. So, each set of sample points distributed around the unit circle corresponds to a basis function in the DFT.
I like to think about the DFT as a set of dot products between the signal and each basis function (so conceptually it is in no way different from any other linear coordinate transform). And the FFT is a way to avoid explicitly computing every single dot product.
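That dot-product view can be spelled out in a few lines of numpy (a sketch with a made-up random signal):

import numpy as np

x = np.random.rand(16)                               # some real-valued signal
N, k = len(x), 3                                     # pick one frequency bin k
basis = np.exp(-2j * np.pi * k * np.arange(N) / N)   # the k-th complex-exponential basis function
print(np.isclose(np.dot(x, basis), np.fft.fft(x)[k]))   # True: DFT bin k is the dot product with that basis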
@@compuholic82 Right, the dot product part is a bit interesting. Never thought of it that way.
I did end up trying to code this up. In the video it looks like Python, but there were some things in his code that did not at all look like valid Python, so I had to do some reinterpreting of what was going on. I managed to get it working, although the output for some reason looks like it's mirrored: the elements past n/2 were the same as the elements before n/2 but flipped. Idk if that's a bug or just part of how it works?
Also, I find it kind of bizarre to use imaginary numbers. For a long time I never really understood it, but I recently realized that imaginary numbers can just be thought of as 2D vectors. And the whole "e^(2*pi*i/2)" thing is actually just a cos, sin vector in disguise, which I would never have realised unless I dug deeper. It seems like it would be more sensible to view the algorithm that way.
@@Andrew90046zero From your description it is not entirely clear what you were trying to do. But the "mirrored" elements may be entirely correct. An N-point DFT of any real-valued signal will only result in N/2+1 independent coefficients with an even symmetry. The DFT of a purely imaginary input signal will lead to an odd symmetry. And it's very common to use complex numbers here. Complex numbers really shine in DSP applications because manipulating exponential functions is a lot easier than working with trigonometric functions.
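That symmetry is easy to check numerically (a sketch assuming numpy): for real input, X[N-k] is the complex conjugate of X[k], which is exactly why the output array looks mirrored.

import numpy as np

x = np.random.rand(8)                              # real-valued input
X = np.fft.fft(x)
print(np.allclose(X[1:], np.conj(X[1:][::-1])))    # True: X[N-k] == conj(X[k]) for k = 1..N-1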
@@compuholic82 Well, I aim to use this in my own applications so I can detect patterns or handle any other situation where I need to break down a signal,
and I was just unsure if there was a bug in my code because the output array has mirrored values.
Also, you can think of the convolution theorem: convolution in the time domain is multiplication in the frequency domain. Convolving the polynomial coefficients is the same as pointwise multiplication of the value representations.
Why don't you just square the output of the cosine wave and multiply the sine wave output with the cosine output, remove the DC offset and normalise to get a sine-wave and a cosine wave of twice the frequency?
It works when the number of samples is a power of 2.
Great content! Can you make videos on complexity classes?
I wonder why Wolfram Alpha returns the half-sized values: {5.5, 1.5 + i, 1.5, 1.5 - i}.
Much appreciated
Hi Reducible,
How do you programmatically compute the value of Ѡ?
In your example, you have used the hard-coded values [1, i, -1, -i] when n=4, and [i, -1] when n=2. What if n=8, or a much bigger value? How do you then compute Ѡ = e^(2πi/n)? Or is there a constant value for e^i?
Thanks!
Hi Reducible,
Ѡ = e^(2πi/N)
I think Ѡ can be implemented as cosΘ + i*sinΘ, where Θ = 2π/N
A Java object (if implementing in Java, for example) can handle the real part cosΘ and the imaginary part sinΘ separately.
example :
com1 = x1 + iy1
com2 = x2 + iy2
com1 + com2 = (x1+x2) + i * (y1+y2)
com1 * com2 = (x1x2 - y1y2) + i * (x1y2 + x2y1)
Makes sense?
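In Python, at least, there is no need to manage the real and imaginary parts by hand, since complex numbers are built in; a minimal sketch of both routes (direct exponential and Euler's formula):

import cmath, math

n = 8
omega = cmath.exp(2j * math.pi / n)                                           # e^(2*pi*i/n) directly
omega_euler = complex(math.cos(2 * math.pi / n), math.sin(2 * math.pi / n))   # cos(theta) + i*sin(theta)
print(cmath.isclose(omega, omega_euler))                                      # True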
Thanks
👍👍👍👍👍👍👍👍👍
1:05 "whenever you are lost one of the most helpful things you can do is completely break down.." ahh yes that I can do "a specific example" oh..
Do you offer the code via Patreon? Would be quite interested in that to polish my manim skills :)
I actually linked the repository for all the code for the video in the description :)
@@Reducible Ha! Did not see it :) Sometimes the solution is that simple :)
10/10
what would passing the coefficient vector into the ifft mean?
I would assume the IFFT is the inverse of the FFT and the coefficient vector is just the list of coefficients.
@@77tigers26 I know that, but if using the FFT on the coefficient vector gives us the polynomial evaluated at the roots of unity, what would passing the coefficient vector into the IFFT physically represent?
Any chance you'll start mirroring your videos on alternative sites like Bitchute? With YouTube increasing censorship, it would be nice to be able to view and support you without supporting YouTube.
I tried implementing this in python and it worked nicely for the FFT but for the IFFT I run into problems... does anyone know what could be wrong? I pretty much copied the FFT and just changed omega as described and the recursive call to the correct function. If I try for instance IFFT([1,-1]) I would expect [0,1] (function y = x), but I instead get [0,2]...
I'm glad you tried implementing it -- see description in my main FFT video and pinned comment for error correction -- there is a subtle error I made in the IFFT.
@@Reducible Should have found that, thanks a bunch! Makes sense :)
Hi there, can you make a video on 3D geometry?
Want more videos on DSA.
Swagger balls.
Now let's figure out how to implement complex numbers and e^(2*pi*i/n) in Python :D
This should not be too hard for you :) It is built in already.
You can also just use Euler's formula.
0:05
do more videos
Hi :)
Can I translate your videos for a Russian audience and post the results on my channel? Without monetization, of course.
Thank you for offering -- I think the most standard way to do this on YouTube is to create a separate channel for it (e.g. Reducible - Russian translation) so that viewers know it is affiliated with this content (this is the way 3blue1brown does it, for example, with ua-cam.com/channels/CbgOIWdmYncvYMbl3LjvBQ.htmlvideos). If you are willing to put in the time to create something like that and translate it (I must warn you, it takes a lot of time), it will be much appreciated. When you are ready to set something like that up and upload it, please feel free to shoot me an email (you can find the email on my Patreon page).
@@Reducible Thank you for the answer!
I'll create a new channel named Reducible - Russian and will send you the URL once I've created at least the first three videos, if you don't mind. At first I want to test my strength :)
@@homka122 Sure, sounds great!
+1
Do it again lol
Too fast! :( I'm so dumb.
Backtracking please
FALSE YOU DONT EXPLAIN FFT RECURSION
Use simple English words if you can 😮😆
Aren't you copying the format of the other channel? Or are you the same team?
3blue1brown let’s everyone use manim
3blue1brown published his animation system to github so that anybody can use it to make videos. It has a very distinctive style, so most videos made using it will look like 3blue1brown videos. github.com/3b1b/manim
It seems to be under the MIT license, as long as someone doesn't use 3blue1brown's direct visual templates within it.