I made my own fun desmos thing based on this. Does some trippy stuff with the sliders, so that the function is centered at arbitrary values of x, not just zero. Enjoy! www.desmos.com/calculator/8ryzfxcyot
Here's another fun one: www.desmos.com/calculator/ll08w6pbee
Nice!
Whoa! Exponential series is such an interesting idea!
I think we can add negative terms like e^-x, e^(-x/2), e^(-x/3), … to the sum to make it converge on the positive side as far as it does on the negative side, even with a finite number of terms
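This two-sided idea is easy to sketch numerically with the same derivative-matching setup from the video. Below is a minimal check with a hypothetical choice of symmetric rates ±1, ±1/2 plus a constant term, matching the derivatives of cos(x) at 0 (my own example target). Because the rates are symmetric and cos is even, the resulting approximation is even too, so the error at x and -x comes out the same:

```python
import numpy as np

# Derivative-matching with a symmetric basis {1, e^x, e^(-x), e^(x/2), e^(-x/2)}
# (a hypothetical choice of rates), matching the derivatives of cos at 0.
rates = np.array([0.0, 1.0, -1.0, 0.5, -0.5])
derivs = np.array([1.0, 0.0, -1.0, 0.0, 1.0])   # cos(0), cos'(0), cos''(0), ...

# Row k requires sum_j a_j * rates[j]^k to equal the k-th derivative at 0
M = np.array([rates**k for k in range(len(rates))])
a = np.linalg.solve(M, derivs)

S = lambda x: np.sum(a * np.exp(rates * x))
# With symmetric rates and an even target, the error is symmetric in x:
print(S(2.0) - np.cos(2.0), S(-2.0) - np.cos(-2.0))
```

The two printed errors agree, which is the "converges on the positive side as far as on the negative side" behavior the comment describes, at least for an even target function.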
wow! That 5x5 matrix! You must have prepared a lot before! Thank you Mu prime!
Hi! Thank you for the great video. I just have one really simple question: how did you get the equation -1-2e^x+4e^(x/2) at 12:01? What values did you input to get this equation? And also, how did you find the inverse matrix at 14:52? In other words, what kind of process did you go through? Thank you!
In both cases, I started with the matrix equation at 10:38 for a certain n, then calculated the inverse with a computer program!
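The same calculation can be reproduced in a few lines. For S(x) = a_0 + a_1 e^x + a_2 e^(x/2) + ... + a_n e^(x/n), the k-th derivative of e^(x/j) at 0 is (1/j)^k, so matching S^(k)(0) to the target's derivatives gives a linear system. Here's a sketch (note the derivatives 1, 0, -1 are those of cos(x) at 0, which reproduces exactly the -1 - 2e^x + 4e^(x/2) from 12:01):

```python
import numpy as np

# Approximate f by S(x) = a_0 + a_1 e^x + a_2 e^(x/2) + ... + a_n e^(x/n)
# by matching S^(k)(0) = f^(k)(0) for k = 0..n.
def exp_series_coeffs(derivs):
    n = len(derivs) - 1            # derivs = [f(0), f'(0), ..., f^(n)(0)]
    M = np.zeros((n + 1, n + 1))
    M[0, 0] = 1                    # constant term only contributes at k = 0
    for k in range(n + 1):
        for j in range(1, n + 1):
            M[k, j] = (1 / j) ** k  # k-th derivative of e^(x/j) at 0
    return np.linalg.solve(M, np.array(derivs, dtype=float))

# Matching cos(x) at 0 (derivatives 1, 0, -1) with n = 2:
coeffs = exp_series_coeffs([1, 0, -1])
print(np.round(coeffs, 6))   # [-1. -2.  4.]  ->  -1 - 2e^x + 4e^(x/2)
```

Inverting the matrix at 14:52 is the same step for a larger n, just done once for the whole system.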
Thank you for the reply! I'm trying to play around with this summation of exponential functions. Btw, do you think we can find error approximations for this technique, just like the Lagrange error bound for Taylor series?
Interesting!
This reminds me of Sturm-Liouville theory. I wonder if we can construct a Sturm-Liouville problem with real exponential solutions. If we can, then the theory states that the solutions form a complete set for the function space, and therefore any function can be expanded as a linear combination of those eigenfunctions. Or: an infinite series of exponentials.
My idea is to use y"-k^2y=0 as the eigenvalue equation. This produces {exp(-kx), exp(kx)} as solutions.
If we can find a correct combination of homogeneous boundary conditions we can get the set of eigenvalues {(k_n)^2}. And from there easily expand any function as a series of exponentials.
I haven't been able to find those correct boundary conditions. Maybe you can?
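The solution set claimed for the proposed eigenvalue equation is at least easy to verify symbolically. A quick sympy check (just a sketch confirming {exp(-kx), exp(kx)} solve y'' - k^2 y = 0; it doesn't resolve the boundary-condition question):

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)

# Verify that exp(kx) and exp(-kx) solve y'' - k^2 y = 0,
# the proposed Sturm-Liouville eigenvalue equation.
for y in (sp.exp(k * x), sp.exp(-k * x)):
    residual = sp.simplify(y.diff(x, 2) - k**2 * y)
    assert residual == 0
print("both exponentials satisfy y'' - k^2 y = 0")
```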
Something I've been wondering: since the Mellin transform, Laplace transform, Fourier transform, etc. can be used to create infinite sums of time-dependent functions based on their frequency domain, do the Taylor series and other types of infinite sum expansions count as "frequency-dependent," or are they a separate phenomenon?
I haven't learned about the Fourier and Mellin transform yet, so I'm not sure how that frequency-domain kind of stuff works! My guess is that the Taylor series isn't frequency-dependent because it's explicitly based on the derivatives of the original function and Taylor's inequality, rather than coming from frequency considerations or integrals.
@@MuPrimeMath I wonder if it is, though, because a Taylor series is a sum of polynomial terms to integer powers and the Fourier series is imaginary exponentials to integer powers. So maybe instead of the real part being held constant (0 for Fourier) and sampling points from the frequency domain, the Taylor series *could* be a sampling on the imaginary axis, or a sampling of points from some other shape rather than a line
Benjamin Brady That doesn't really make sense. The idea that makes Fourier series work the way they do is the imaginary exponent and Euler's identity. That's where the frequencies come in. If the Taylor expansion is done on the real line, then there are no well-defined frequencies in each term of the series.
Why not use hyperbolic sines and hyperbolic cosines? They will turn into two-sided exponential series, but they may (or may not) be easier to work with.
That would be possible too! I was just trying to work explicitly with exponential functions.
At 10:50 you mention that you haven't found a way to solve for an analytical equation for the coefficients a_i. You can solve this by making a slightly different sum S(x) = a_0 + a_1 e^(1x) + a_2 e^(2x) + ... and making the substitution u = e^x. This boils everything down to a Taylor series, which we already know the formula for. Finally you get:
a_i = g^(i) (0)/i! where g(x) = S(ln(x))
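The substitution itself can be checked numerically. Here's a sketch with a made-up example S(x) = 3 + 2e^x - 5e^(2x): under u = e^x, the function g(u) = S(ln u) really is an ordinary polynomial in u, and its coefficients are the a_i. (I recover them with a polynomial fit on a grid rather than derivatives at u = 0, to stay away from ln(0).)

```python
import numpy as np

# Hypothetical example: S(x) = 3 + 2 e^x - 5 e^(2x). With u = e^x,
# g(u) = S(ln u) = 3 + 2u - 5u^2 should be an ordinary polynomial in u.
S = lambda x: 3 + 2 * np.exp(x) - 5 * np.exp(2 * x)
u = np.linspace(0.1, 2, 50)
g = S(np.log(u))                 # g evaluated on a grid of u-values

# Fitting a degree-2 polynomial recovers the a_i (highest degree first)
coeffs = np.polyfit(u, g, 2)
print(np.round(coeffs, 6))       # [-5.  2.  3.]
```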
The problem with that is that you may have g(ln(x)) but not g(x). And when you substitute x = 0, you get some nasty minus infinities from every ln(x)
There is something that I don't understand (maybe because my English is not very good):
When is a family of functions enough to generate any other function?
There is something like a space of functions (containing every continuous function), and I want to know when a family of functions is a basis of this space.
x^n is a basis (Taylor)
sin(nx) and cos(nx) are a basis (Fourier)
And now e^(x/n) is a basis too (Mu Prime Math)
How many other bases are there? Infinitely many, I suppose, but how can we find them?
My guess is that any infinitely-differentiable function can be used to make some kind of series representation of other smooth functions. Some of those series will have limitations: for example, using only sin(nx) and cos(nx) means that the series is periodic, so it can't represent non-periodic functions beyond a certain range. In general, though, using the method that I showed in this video to set the derivatives equal should give a good approximation for any smooth function!
Very cool. Maybe I wasn't listening very well and you already explained it, but is this useful in some fields of mathematics, or is it all just good fun?
Like I said at the beginning, I was working on this because of Dr. Peyam's question about whether the derivative operator was diagonalizable!
@@MuPrimeMath Got it. Thanks for the reply.☺
Well, this idea is sort of applicable with generalized Hahn series.
This is very nice! But unfortunately it can't make the approximation region as large as we want!
For me - being a not-so-talented :) fan of math - it somehow reminds me of the complex version of the Fourier series, at least from a distance ;) I'm just wondering if there can be any connection in any way ...
Here is a link to the Dr Peyam video on diagonalizing the differential operator that inspired this video:
ua-cam.com/video/A6jPTGWev9Y/v-deo.html
I will do my best to find the solution for that problem
In case it's useful, remember that you can change 1/2, 1/3, 1/4, ... to some other decreasing sequence if needed!
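One way to see why other sequences work: the derivative-matching matrix is a Vandermonde matrix in the rates, so it stays invertible as long as the rates are distinct. A quick sketch with a hypothetical geometric sequence 1, 1/2, 1/4, 1/8 in place of 1, 1/2, 1/3, 1/4:

```python
import numpy as np

# Hypothetical variant: rates 1, 1/2, 1/4, 1/8 (plus 0 for the constant term)
# instead of 1, 1/2, 1/3, 1/4. Row k of M holds rates^k, so M is a
# Vandermonde-type matrix and is invertible whenever the rates are distinct.
rates = np.array([0.0, 1.0, 0.5, 0.25, 0.125])
M = np.array([rates**k for k in range(len(rates))])

print(np.linalg.matrix_rank(M))   # 5: solvable for any set of derivative values
```

So any strictly decreasing sequence of distinct rates gives a well-posed system; which sequence converges best is a separate question.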