I'm not good at math but I have a friend who told me that the sum 1-1+1-1+1-..... has the same result as the limit when n approaches infinity of the function 0^^n
Depends on your axioms and applications, but both can be correct. Your answer requires us to say that convergence can be a set and not a singular value. We also lose some properties of the Cesaro and its extensions-we could say that 1-2+3-…=1/4 with the point-valued extension, but no set-valued convergence exists. But we could extend it further and get some kind of “function-valued” convergence. It’s all a matter of choice and what properties you are exploring of different types mathematical objects. It’s why we choose to leave x/0 undefined in elementary algebra, but there are many ways it can be defined in more specific, self-consistent algebraic fields (e.g., calculus vs. extended reals vs. Riemann sphere vs. hyperreal/surreal infinities&infinitesimals). Also see: the inverse function of f(x)=x^2 ->g(x)={sqrt(x),-sqrt(x)} when we consider set-valued functions to be acceptable instead of only point-valued functions.
Answer, no. It is not a convergent series. Being off screen doesn’t change the nature of the series. We can use certain methods to obtain an associated value that is helpful. The series still diverges to infinity.
Absolute nonsense.! 1/2 + 1/4 +1/ 8 and so to 1/512 + 1/1024 and so on, can NEVER EVER get to magical number 1. There will ALWAYS be that very tiny minuscule bit to be added later on, but adding that little bit or the NEXT little bit, and the little bit after that will always leave a little bit that won't make the whole total to be EQUAL to the theoretical 1! 3Blue1Brown did an excellent video proving my very point. Go look it up.
You need to understand the definition of infinite summation. We first define Sn = sum from k=1 to n of (1/2)^k. Now we define sum from k=1 to infinity of (1/2)^k to be limit as n approaches infinity of Sn which converges to 1, QED.
It sure can get to the magical 1 - you just add all the elements in the infinite set. By your reasoning .9 repeat is not equal to 1. But they are equal in the true sense of being equal. .9 repeat is a real number and 1 is a real number. They both correspond to exactly the same point on the real number line. Hence they are equal. This whole idea of not being able to physically add up an infinite number of values in a finite time misses the point entirely. It is possible to take the infinite set as a whole and have an infinite number of positive values add to a finite sum. If it didn't, reality in this universe would not exist. If 3B1B claims that 1/2 + 1/4 + 1/8 + ... 1 then he is wrong on this one. It's that simple.
@@islamicguidance7204 You are bang on. I've had this discussion a thousand times to no avail. There are simply some who will just never get it no matter how many times you try.
It looks like the Cesaro sum of the Fourier series ends up _undershooting_ the discontinuities. What if we add the Fourier series and its Cesaro sum together and average _that_ ?
It's true, but somehow undershooting is more desirable, in the sense that we are dropping down in the same direction as the discontinuity.
The Gibbs phenomenon is an overshoot of about 0.09, regardless of the number of terms you sum. The Cesàro sum of the first 100 terms of the Fourier series only undershoots by about 0.01, which is much smaller.
Because of this, averaging the sum of the Fourier series and the Cesàro series will not fix the undershooting - what you get will still undershoot by about 0.01. But what will lessen the undershooting is averaging the Cesàro series again. Then the sum of the first 100 terms will only undershoot by an error of about 0.0001.
You could do a weighted average to get whatever level of undershooting or overshooting you wanted, though, right? It's just that pure Cesàro is usually better at not causing crazy spikes, which is usually the biggest problem.
@@romajimamulo I'm guessing that won't generalize. You could do that for the square wave, but for another function the approximation could be worse.
Then add that sum to the Cesàro sum. Then add that to the first sum.
There are several things I would like to add to the discussion:
0) The process of finding the averages of the partial sum sequence can be iterated; this process is called Hölder summation, denoted (H, k), where k is the number of times the averaging is done. In particular, (H, 0) denotes the case where we work with ordinary summation, and (H, 1) the case with Cesàro summation.
1) This idea of using averages has a lot in common with the Cauchy principal value operator, which uses an idea of the average of a function at a singularity. For example, the CPV of the Riemann ζ at s = 1 is equal to γ, because lim_(ε → 0) (ζ(1 − ε) + ζ(1 + ε))/2 = γ.
2) Really, the idea of Cesàro summation/Hölder summation is more general. This is not about summation, but about limits of sequences to begin with. For example, if a sequence diverges, but its sequence of averages converges, then you may say that the Cesàro limit of the sequence exists. Done this way, it is meaningful to say that the Cesàro limit of (-1)^n as n -> ∞ is equal to 0, even though the sequence diverges.
Let me give another example. Consider the terms 1/4 + (2n + 3)(-1)^n/4. The Hölder limit (H, 2) is equal to 1/4, with the implication that the (H, 2) limit of n(-1)^n is equal to 0 as n -> ∞, even though both the Cesàro limit and the ordinary limit do not exist. Incidentally, why did I choose this particular example? Because those terms are the nth partial sums of (n + 1)(-1)^n, so indeed, the (H, 2) summation of (n + 1)(-1)^n is equal to 1/4, as others have pointed out in the comments. You can always take it further, and the result is that, in general, the (H, k) limit of n^(k - 1)(-1)^n is equal to 0 as n -> ∞.
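[Editor's note: for the ζ example, the standard Laurent expansion at the pole makes the cancellation explicit - a textbook fact, not from the video:]

```latex
\zeta(s) = \frac{1}{s-1} + \gamma + O(s-1)
\quad\Longrightarrow\quad
\frac{\zeta(1-\varepsilon) + \zeta(1+\varepsilon)}{2}
= \frac{1}{2}\left(-\frac{1}{\varepsilon} + \frac{1}{\varepsilon}\right) + \gamma + O(\varepsilon)
\;\to\; \gamma .
```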
I had the pleasure of taking one of your calc courses last term, and you are by far one of the best teachers I've ever had. I love your channel, too - stuff like this is fascinating, and your tutorials are the only reason I'm at all passing my current courses. Thanks for being awesome, Trefor
0:15 I'm going to say something really pedantic, but nonetheless important. Sequences don't "eventually converge". At any given partial sum you won't be at the limit, and the process of assigning a value to an infinite sum IS NOT a sequential one. It is not that you do thing 1, then thing 2, then thing 3 forever, and after doing infinitely many things you get a particular result. We look at the definition of the sequence, not the terms.

Consider an infinite sum of random real numbers between 0 and 1. It is utterly incoherent to suggest that this could have a value, because even though a convergent series like 1/2+1/8+1/16... is possible, it has probability 0. A sum of random reals is incoherent because it has infinite information that cannot be compressed to a single value.

When we take something like 1/2+1/8+1/16..., the terms aren't what we are assigning a value to; it is the function that generates each element, together with the assigned boundary conditions, that we are assigning a value to. It is an assignment, not an evaluation. We define the infinite sum to have a value based on criteria that keep it well behaved. We do this not because it is true that the series sums to a value; we assert that it does because doing so is useful.
Thanks for the video! I'm very surprised I hadn't heard of the Fejér approximation before. It seems very useful for dealing with signal processing issues
Repeating Cesàro summation for 1-2+3-4+…, the regularized sum converges to ¼. Or maybe you could also say that regularization of the partial sums of 0 and ½ gives you ¼ as well. Abel called these answers and attempts "the devil's work", but nevertheless he devoted his mathematical life to it.
I thought of this when I watched the video too. Could there be 1st-order Cesàro, 2nd-order Cesàro and more? Is there any sum that does not converge even at infinite Cesàro order but can still be easily represented?
@zaffreknightx3445 Cesàro summation is a limit where the arithmetic means of the first n partial sums converge as n tends to infinity.
Take the Grandi series 1-1+1-1+... as in the video. The partial sums are 1 0 1 0 1 0 1 0 etc. Those partial sums have running totals as well: 1 1 2 2 3 3 4 4 5 5 etc. Now look at the arithmetic means and you see they tend to a half: 1/1, 1/2, 2/3, 2/4 etc. For every two elements the total grows by one, so on average a half. It's statistics...
This only works when the resulting sequence of means converges. Take 1+2+3+4+..., where there is no convergence for any of the sequences: the partial sums tend to infinity, and so do the averages. Thus no Cesàro "summation".
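[Editor's note: if it helps to see the arithmetic concretely, here is a minimal Python sketch of exactly this computation; the variable names are mine:]

```python
from itertools import accumulate

terms = [(-1) ** n for n in range(10000)]   # Grandi series: 1, -1, 1, -1, ...
partial_sums = list(accumulate(terms))      # 1, 0, 1, 0, ...
# arithmetic means of the partial sums: 1/1, 1/2, 2/3, 2/4, ...
means = [s / (n + 1) for n, s in enumerate(accumulate(partial_sums))]
print(means[-1])                            # ~0.5
```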
Adding on to what @jeroenvandorp said, there are in fact iterated Cesàro summations, which can sum more complicated series, as you would hope. For example, first-order Cesàro summation is unable to sum 1 - 2 + 3 - 4 + 5 - 6 + … but second-order Cesàro summation assigns it a value of ¼.
There are, however, still many series that cannot be Cesàro summed with any order; 1+2+3+4+5+… is one such example.
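[Editor's note: a quick numerical check of both claims - a sketch, where `cesaro_means` is my own helper, not anything from the video:]

```python
from itertools import accumulate

def cesaro_means(seq):
    # running arithmetic means of a sequence
    return [s / (n + 1) for n, s in enumerate(accumulate(seq))]

terms = [(n + 1) * (-1) ** n for n in range(20000)]   # 1, -2, 3, -4, ...
partial_sums = list(accumulate(terms))                # 1, -1, 2, -2, ...
once = cesaro_means(partial_sums)   # first order: alternates near 0 and 1/2
twice = cesaro_means(once)          # second order: settles down
print(once[-2:], twice[-1])         # twice[-1] is close to 0.25
```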
Wouldn't 1/4 just be the first-order regularization anyway? The 0 and 1/2 came from regularizing HALF of the original series (computing the sum of the odd-placed and even-placed elements separately), meaning that the "n" used for each sum is actually n/2. Wouldn't taking that into account make 1/4 the answer anyway? (0/2 = 0, (1/2)/2 = 1/4)
If this is wrong, I'm just not seeing how taking averages of two separate groups of partial sums using certain criteria, and getting fittingly different answers, implies that the series diverges; it's the behavior of the two separate groups together that defines the sequence.
@@nicholasmoffett4327 Hi, you can't "split" infinite and divergent series or swap the order. A series you can regularize is not the same series once you change the order, even of just two elements. And you can't sum the pieces, because they're undefined as far as normal summation is concerned. Basically the "answer" here tells you something about the "progression" or "development" of the series; in this way you can make (meaningful) sense of it by assigning this value. Well, the video illustrates that neatly!
I love how this complements the new video about -1/12 on Numberphile
Fourier series have really gotten me into maths. I'm currently studying, and I've used Fourier series in multiple applications: analyzing signals (audio specifically), image processing, and camera sensors. I was aware of the Gibbs phenomenon but didn't know that you can kinda fix it for practically finite sums.
Great Video as always!
There is an analog of this in process control theory. A dynamic linear system is "stable" if the state variables converge to a finite value. It is unstable if any one state variable does not do that. But then, similarly, it felt wrong to say a state variable that just oscillates between two finite values is "unstable." So when the variable does not converge to a single value, but you can define a finite bound that the state variable never exceeds, it is called "Lyapunov stability." It reminds me of why your 1,-1,1,-1,… "converges" but 1,-2,3,-4,… does not: there is no bound you could define to contain all the values of the second series.
I came here from Matt's final theory
For the Cesaro divergent series you can define the Cesaro^2 summation of partial sums.
Since applying the Cesaro sum to an already converging sequence does not change the value of convergence, what would happen if we just kept taking Cesaro sums of Cesaro sums until it converges? Like in the 1 - 2 + 3 - 4 + ... case, we get a sequence that converges to 0 for even values and 1/2 for odd values. If we took the Cesaro sum of that sequence, we would get a value of 1/4. If we take the limit of taking infinite Cesaro sums, would all oscillating sequences converge to some value?
I think the answer to your question is yes, but I will look into this more carefully later when it is not essentially midnight.
The answer is yes, but not in a very satisfying way: if you take the average infinitely many times, you end up with a constant sequence, consisting of just the first term of your original sequence! (And this is true for ANY sequence, not just ones that oscillate.)
To see this in action with the 1-2+3-4+... example, the sequences look like:
1, -1, 2, -2, 3, -3, ...
1, 0, 2/3, 0, 3/5, 0, 4/7, ...
1, 1/2, 5/9, 5/12, 34/75, 17/45, ...
1, 3/4, 37/54, 89/144, ...
1, 7/8, 263/324, ...
Every sequence after the third converges to 1/4, but notice that the second term of each sequence is rapidly converging to 1. This is in fact happening for every term: as you go further down, the beginning of each sequence gets closer and closer to 1,1,1,... and it takes longer and longer to leave 1 and start approaching 1/4. If you take the limit of these sequences, you end up with just 1,1,1,... and the more interesting convergence to 1/4 is totally lost.
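[Editor's note: a sketch in exact rational arithmetic that reproduces this creep toward 1; the helper names are mine:]

```python
from fractions import Fraction
from itertools import accumulate

def means(seq):
    # running arithmetic means, kept as exact fractions
    return [s / Fraction(n + 1) for n, s in enumerate(accumulate(seq))]

# partial sums of 1 - 2 + 3 - 4 + ...: 1, -1, 2, -2, ...
seq = [Fraction((n // 2 + 1) * (-1) ** n) for n in range(40)]
for _ in range(12):
    seq = means(seq)
print([float(v) for v in seq[:4]])  # early terms are creeping toward 1, not 1/4
```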
@@japanada11 Interesting! I hadn't considered that. I wonder if truncating the first several terms before performing the second sum would be enough to get the "sensical" result of 1/4
@obaked7013 That's a good idea, but you'd have to do some sort of truncation every single time; if at any point you stop truncating, then whatever your next sequence is, its first term is going to end up dominating in the long run.
One thing you could do, for example, is drop the first average you compute at each step (since it's just the same as the previous first term). So for 1 - 2 + 3 - 4 + ... you would get
1, -1, 2, -2, 3, -3, 4, -4, ...
0, 2/3, 0, 3/5, 0, 4/7, 0, 5/9, 0 ...
1/3, 2/9, 19/60, 19/75, 193/630, 193/735, 377/1260, ...
5/18, 157/540, 1013/3600, 3007/10500, 8303/29400, ...
where the first term in each sequence is the average of the first _two_ terms in the previous sequence (and the second term is the average of the first three terms above, and so on). Oddly enough, the data suggests that the sequences end up converging to a constant sequence with constant 0.28383382... I honestly have no idea what's going on here.
But one thing I can say for sure is that if you do this truncating thing, it's no longer true that you'll always end up with a constant (or even convergent) sequence in the limit. As an example, consider the series 1 - 4 + 14 - 64 + 362 - 2428 + 18806 - ... then if you compute the partial sums and then the sequences of averages you get
1, -3, 11, -53, 309, -2119, 16687, ...
-1, 3, -11, 53, -309, 2119, -16687, ...
1, -3, 11, -53, 309, -2119, 16687, ...
where again, the kth term in each sequence is the average of the first k+1 elements in the previous sequence. These sequences just keep flipping back and forth and clearly never converge! (These sequences are A000255 from the OEIS but with alternating signs).
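[Editor's note: for anyone who wants to poke at this, a sketch of the truncation scheme described above. It is my own code and simply reproduces the experiment, so treat the 0.2838... observation as empirical:]

```python
from fractions import Fraction

def shifted_means(seq):
    # k-th entry = average of the first k+2 entries of the previous sequence,
    # i.e. ordinary running means with the trivial first average dropped
    return [sum(seq[: k + 2]) / Fraction(k + 2) for k in range(len(seq) - 1)]

# partial sums of 1 - 2 + 3 - 4 + ...: 1, -1, 2, -2, ...
seq = [Fraction((n // 2 + 1) * (-1) ** n) for n in range(80)]
for _ in range(40):
    seq = shifted_means(seq)
print(float(seq[0]), float(seq[1]))  # both creep toward the 0.2838... constant
```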
Extending the Cesàro summation by repeating it would give 1/4 for the example which diverges under plain Cesàro summation. Is this a concept where you can say a series converges if it converges for some finite N repetitions of Cesàro summation?
If I understand correctly, the Cesaro summation smoothing of the Fourier series is basically replacing each partial sum (the truncated Fourier series for some N) with an average of the partial sums for (1, .. N). I am curious and might investigate what the infinite repetition of Cesaro summation would look like. At a glance, it seems like it is shifting the weighting each term has to earlier and earlier in the sequence, which means this probably has limited applications for smoothing Fourier series (which by necessity needs to truncate at some finite value).
What you are talking about is called Hölder summation.
@@angelmendez-rivera351 Good to know, thanks!
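[Editor's note: to make the Fourier-series connection above concrete, here is a hedged sketch comparing a truncated Fourier series with its Fejér (averaged) version. It assumes a 0-to-1 square wave, so the numbers match the ~0.09 overshoot quoted elsewhere in this thread; the function names are mine:]

```python
import numpy as np

def S(N, x):
    # partial Fourier sum of a 0-to-1 square wave (odd harmonics k <= N)
    total = np.full_like(x, 0.5)
    for k in range(1, N + 1, 2):
        total += 2 / (np.pi * k) * np.sin(k * x)
    return total

def sigma(N, x):
    # Fejér mean: average of the first N partial sums S_0 .. S_{N-1}
    return np.mean([S(n, x) for n in range(N)], axis=0)

x = np.linspace(0.001, np.pi - 0.001, 4000)
print(S(99, x).max())       # ~1.089: the Gibbs overshoot of ~0.09
print(sigma(100, x).max())  # stays <= 1: the Fejér mean never overshoots
```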
Make a video about circle inversion
I was reading Dirac's 'Principles of Quantum Mechanics' the day after this video came out and he mentioned this way of regularizing diverging sums and because of this video I was equipped to face that!
I can see how Fejér's approximation can be very useful. The Gibbs phenomenon might lead to undesirable overcorrection in certain data models. I guess these are highly specific cases, but I will file this under "weird knowledge acquired" and be happy about it.
Imo, the series in the title makes the most sense to think of as simply describing an idealized square wave. Personally I think the notions of convergence and divergence need another term, indicating that it neither converges nor diverges: it stays static. The Cesàro summation/convergence still makes sense for it, but that describes the convergence of the sum of the series, not the convergence of the series itself. The summation surely converges to 1/2, but to conflate that with the series itself converging is like conflating a formula with its derivative or its integral. Edit: Okay yeah, and there's the point: if the series itself converges, then the series formed by its summation will converge too. That certainly makes sense and should follow logically from the rest of this.
glad to see unconventional convergence being tackled more and more! certain people will vehemently argue that you *can't* assign values to these series, but this demonstrates a fundamental misunderstanding of how maths works. infinity is much more complicated and beautiful than high school makes it out to be, and videos like these shed light on that beauty!
In the case of 1-2+3-4+..., even though the first averages don't converge (they alternate between 0 and 1/2), if we apply the same averaging technique again, it converges to 1/4. We could say that a sequence converges at level n if the series converges when we apply the averaging-of-partial-sums operation n times, but not n-1 times.
Ahh, so we are damping the oscillations - neat.
Well, that opens up a load of other possibilities. Any weighted average will also work as long as the maximum weight for any term decreases. So, given that the partial sums for small n are going to be the most disruptive, if they get a lower weight then convergence should be more rapid, but the calculation more complex and less intuitive.
Averaging is a finite impulse response low pass filter, which sort of explains it from a dsp perspective. This is a real interesting video!
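[Editor's note: a tiny sanity check of that DSP claim; my own sketch:]

```python
import numpy as np

n = 100
h = np.ones(n) / n                 # impulse response of an n-tap moving average
H = np.abs(np.fft.rfft(h, 4096))   # magnitude of its frequency response
print(H[0], H[2000:].max())        # gain 1.0 at DC, tiny near Nyquist: low-pass
```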
1:28 Interesting, what is that sequence called? Interestingly, it also seems to be the addition of every prime number sequence, in other words 1/2 + 1/4 + 1/8... + 1/3 + 1/6 + 1/9... + 1/5 + 1/10 + 1/15..., but I could be wrong about that; it was just my first intuition.
Whoops! I actually meant to say 1/2 + 1/4 + 1/8... + 1/3 + 1/9 + 1/27... + 1/5 + 1/25 + 1/125..., which is quite a bit different than what I posted; indeed my sequences got progressively worse as I was typing...
And with just a quick check I find this to be incorrect.
Interesting topic thanks.
Thank you!
@@DrTrefor❤😊
Also, what if you keep applying this Cesàro sum to itself, over and over? If it does end up converging, you write down how many times you had to do that to get it to converge. Since that actually corresponds to the power of the total, you could even make it continuous, and since there's often a continuous representation of these sums as well, we'd have an entire plane of ways to check if a list converges, which would be excellent for a complex plane. Or maybe I'm reading too far into it, because repeated infinite sums aren't as easy to infinitely nest, even though their continuous cousin, the integral, can be. The other problem is that there really should be only two constant outputs for every series: the thing it actually converges to, and how many Cesàro sums you had to take to get there. But who knows? I know I'm definitely gonna have something to experiment with for a while.
great vid👊🏽 ,
sir can we get a short course on ALGEBRAIC TOPOLOGY ?
ooh maybe. I do have one intro video on the fundamental group (with TomRocksMaths)
@@DrTrefor oh yessir, enjoyed that vid, so I commented hoping you'll post more on it
It converges to 1/2, as governed by the formula used to calculate the infinite sum of powers of any number x
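[Editor's note: presumably this means the geometric series formula, formally evaluated at x = -1, outside its radius of convergence:]

```latex
\sum_{n=0}^{\infty} x^n = \frac{1}{1-x} \quad (|x| < 1),
\qquad
x = -1 \;\Rightarrow\; \frac{1}{1 - (-1)} = \frac{1}{2}.
```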
It's either 1 or 0, never ½, so the only right answer is we don't know
If the average of partial sums ends up diverging for either odd or even terms, what's stopping us from averaging the odd and even together and Cesàro-ing it again?
I almost mentioned this in the video but was worried about being confusing, but yes you can absolutely take the average of the averages - and so on. For series we often don't care about the sum toooo much compared to just the convergence, so your A and B yes have different cesaro sums but what I really care about is that they are both cesaro summable.
Nothing, except that's technically a new type of operation that is going to have its own properties and consequences.
Nothing really stops you from doing anything in mathematics, the question is just what that operation will help you with.
It becomes a rank-2 average sum, i.e. a Cesàro (C, 2) summation. For higher ranks the process is similar.
Brilliant video. I've been waiting for a video on the square wave. Could you make a video on the saw wave? I'm trying to solve it via complex contour integrals but I'm having trouble defining the domain of the complex-valued function.
what about the average of the averages of the partial sums?
Well, Fejér's Approximation is nice and all - but what does occur in nature near discontinuities? IIRC oscilloscopes definitely show the "Gibbs spikes" near those.
Discontinuities don't occur in nature.
It's been a long time, but IIRC Abel summation uses lim_(r->1^-) sum_n{a_n r^n}. I think it agrees with Cesàro when that converges, but sometimes does even more?
Abel summation is stronger than Cesàro summation, yes. In fact, you can say more: Abel summation is stronger than Nørlund means in general.
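[Editor's note: for reference, the definition and the two series from this thread; these are standard facts about Abel summation:]

```latex
\text{A-}\!\sum_{n \ge 0} a_n \;=\; \lim_{r \to 1^-} \sum_{n \ge 0} a_n r^n,
\qquad
\sum_{n \ge 0} (-1)^n r^n = \frac{1}{1+r} \to \frac{1}{2},
\qquad
\sum_{n \ge 0} (n+1)(-1)^n r^n = \frac{1}{(1+r)^2} \to \frac{1}{4}.
```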
Hi, please suggest some easy book for understanding differential equations and dynamical systems. I find L. Perko quite tough.
Maybe you could do a video on .9 repeat.
.9 repeat is not infinitely close to 1, it is not as close to 1 as you want or even imagine.
.9 repeat = 1 That "=" sign is there for a reason. Equal equal and one more time - equal.
Is there an English version of this?
I don't have much to say like the other comments but this is super cool! will continue following this channel :)
Please... Can you do a video about Padé approximation?
Neat ideas! I guess mathematicians have also explored using other types of means in place of the average, e.g. the harmonic mean.
This type of summation seems to have value and make sense only if the original series actually converges in the first place - like in the Fourier example. In the other cases, it also seems that in order to produce some finite value, the divergence must be of the type where each term bounces above and below some in-between finite value. And in that case I'm not sure it really means anything worthwhile. Having said that, it may be that the A(n) series' convergence gives some clues about how S(n) diverges.
Couldn't you just do a Cesaro Sum of a Cesaro Sum? Eventually you'd end up at one that converges, and you could even define the original sum as converging to that result by treating the regularity as an analytic continuation, no?
Then, the real question is, are there any sums that will never converge, no matter how many times you do the Cesaro Sum? These could then be regarded as the "true" divergent summations.
Why is it "average of partial sums" and not "average of first n elements"?
This violates the "regularity" property: if the original sequence converges, the new concept should converge to the same thing. The reason the average of the partial sums is so nice is that, say, the first term appears in n of the partial sums and is divided by n, so it gets a "weight" of 1, which is appropriate for its contribution to the sum.
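[Editor's note: spelled out, with S_k = a_1 + ... + a_k; a standard identity:]

```latex
\frac{1}{n} \sum_{k=1}^{n} S_k \;=\; \sum_{m=1}^{n} \frac{n - m + 1}{n}\, a_m ,
```

so a_1 gets weight exactly 1, and for each fixed m the weight tends to 1 as n grows - which is why regularity holds.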
How would the average of the elements help you find the sum???
If you take 1+1/2+1/4+1/8+…, the averages are: 1, 3/4, 7/12, 15/32, … that is, (2 - 1/2^(n-1))/n, which goes to 0.
This does not help you calculate the sum in any way.
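[Editor's note: in general the element averages carry no information about the sum, because if the partial sums S_n converge at all, then]

```latex
\frac{a_1 + \cdots + a_n}{n} \;=\; \frac{S_n}{n} \;\longrightarrow\; 0 .
```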
Very good video. Thanks for sharing.
Wait, so the Cesaro version of the Fourier series is just a Fourier series with different coefficients, right? And still it converges to the same original function? That feels wrong, non-unique Fourier series!
I guess in practical engineering terms, is the tradeoff that convergence is slower?
Fejér's sum feels like using a Bartlett window in frequency space.
this "Gibbs phenomenon" is very similar to a common phenomena in the world of electronics referred to as "ringing". especially with signals like the square wave shown in the video which are extremely common in digital electronics. it's interesting how nature behaves similarly to the Fourier approximation.
Do Fejér sums just amount to limiting the high frequency response of the system? i.e. a low pass filter of some sort?
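[Editor's note: essentially yes. Expanding the N-th Fejér mean in terms of the complex Fourier coefficients c_k shows it is exactly a triangular (Bartlett) taper, consistent with the window analogy above:]

```latex
\sigma_N(x) \;=\; \frac{1}{N} \sum_{n=0}^{N-1} S_n(x)
\;=\; \sum_{|k| < N} \left(1 - \frac{|k|}{N}\right) c_k\, e^{ikx},
```

so high frequencies are progressively attenuated. It also shows the coefficients depend on N, so this is a family of tapered sums rather than a second Fourier series for the same function.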
Now, can we get a convergent Cesàro^n from every series? By Cesàro^n I mean applying the Cesàro averaging to itself n times. I hope what I meant was clear.
You say the new approximation is better, but not how, except maybe that it "looks" better. The normal Fourier series approximation of the square wave is the best LSE version. Also, it appeared from your graphics that at the location of the max errors (i.e. the Gibbs peak) the error is the same in both approximations. This implies (visually) that the L_inf error is the same.
This really depends on the application as to what "type" of error is more problematic. With Fejér approximations the error falls off in the right direction, i.e. dropping down when the discontinuity jumps down, while the Gibbs phenomenon overshoots and jumps up.
@@DrTrefor yes agreed, it clearly depends on the type of error criterion one is concerned with or optimizing against. I dealt with digital filter design for a few decades and was frustrated with folks who despised the Gibbs overshoot and would choose to smooth it out, when they often implicitly cared about minimizing signal energy error and the solution with the Gibbs overshoot was the best solution for them. All that said, does the solution you presented minimize some specific error measure? If so, that would be interesting/cool. Thanks
Just do the average of the averages... 1/4 😂
The Gibbs phenomenon looks like a case of underdamping. If this analogy works, I wonder whether the Fejér approximation is critically damped or overdamped.
IMO Abel summation is what's called for. As long as someone understands power series, they can grasp it.
No, Abel summation and Cesàro summation are different operators.
@@angelmendez-rivera351 True. AFAIK, though, Abel extends Cesàro and IMO can be more easily applied to sums such as 1-2+3-4+..., since we don't need to iterate.
Again, these views are my opinions, and I think we all know various sayings about opinions...
@@tomkerruish2982 Abel summation is indeed stronger than Cesàro summation, but as far as motivating definitions is concerned, when averaging is involved, Cesàro summation and Hölder summation are appropriate, or Nørlund means in general.
The alternating series of 1 and -1 isn't a divergent series, it is a NON-convergent series: "non-convergent" and "divergent" are not synonymous.
In the standard definition they are synonymous.
"+1-2+3-4+5... doesn't converge even with Cesàro summation"
Abel enters the chat...
there is something more important going on: recursion. you have a value that both solves for a finite value, but also expands to a pattern. it has many applications.
A = 1 + x A
(1 - x)A = 1
A = 1/(1-x)
= 1 + x(1 + x A)
= x^0 + x^1 + x^2 + ... + x^n + x^{n+1}A
A[-1] = 1 + -1 + 1 + -1 + ...
= 1/(1-(-1)) = 1/2
see? the 1/2 has NOTHING to do with averages.
evaluate its derivative at -1, and you get 1/4
[d/dx]A = 1/((1-x)^2) = 1/4
= d[x^0]/dx + d[x^1]/dx + d[x^2]/dx + ...
= 0 + 1 (-1)^0 + 2 (-1)^1 + 3 (-1)^2 + ...
= 0 + 1 + -2 + 3 + -4 + 5 +
....
and take this a bit farther and you show the -1/12 = 1 + 2 + 3 + 4 + .... result in a very rigorous way. it's a bit long to show here.
it is most related to recursive sequences like this:
S = -1
(1-2)S = 1
S = 1 + 2 S
= 1 + 2 (1 + 2 S)
= 1 + 2 + 4 + ... + 2^n + 2^{n+1}S
= Sum[S,n] + 2^{n+1}S
S - 2^{n+1}S = Sum[S,n]
S(1 - 2^{n+1}) = Sum[S,n]
(-1)(1 - 2^{n+1}) = Sum[S,n]
2^{n+1} - 1 = Sum[S,n]
ie: we just derived a closed form for n expansions of the sum 1+2+4+...
it is a very general phenomenon.
Watch it deal with 1.0 = 0.99999.... easily:
T = 9/10 + 1/10 T
(1 - 1/10)T = 9/10
(9/10)T = 9/10
T = 1
what is happening is that infinite strings of digits can be alternate representations for finite numbers!
...9999999 = -1
ie: an infinite string of 9s plus 1 equals 0. it's -1. and this is exactly how 2s complement computer arithmetic works. it wraps around so that for the group, negative and fractional values are simply represented as values with an infinite number of digits.
X = 9 + 10 X
= 9 + 90 + 100 X
= 999999999.....9999
= 9 + 10 X
(1 - 10 ) X = 9
(-9)X = 9
X = -1
it only relies on the use of the equals sign meaning "A=B says A can be rewritten as B, and B can be rewritten as A". there are no appeals to infinity at all.
note this:
-1 = ...99999999
-2 = ...99999998
-3 = ...99999997
ie: -3 + 3 = ...9999997 + 3 = 0
Y = 3 + 10Y
= ...3333333333333
(-9)Y = 3
Y = -1/3
prove it... multiply it times 3
3 Y = 9 + 10 (3Y)
3Y = ......999999
this is a lot like what happens in computer bits in a register. negatives and fractions are represented in binary with the same bits as the positive numbers use.
a representation of -1/12 ... if you show it in base13, then it's a string of 1 digits.
V = 1 + 13 V
(1 - 13)V = 1
V = -1/12
so when you get a proof that 1+2+3+4+... = -1/12 ... it means that it's the same number as the base13 number with 1 in all digits.
the lesson is that numbers with an infinite number of digits can be alt representations of finite numbers, ie: infinity isn't a number.
if you can't accept it, then consider this:
S = -1
-1 = 1 + 2(-1)
= 1 + 2 + 4(-1)
= 1 + 2 + 4 + 8(-1)
S = 1 + 2 + 4 + 8 S
= 1 + 2 + 4 S
= 1 + 2 S
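[Editor's note: a finite sanity check of the ...999 arithmetic, working modulo 10^k the way a k-digit register would; my own sketch:]

```python
k = 12
nines = 10**k - 1                # a k-digit string of 9s, truncating ...999
print((nines + 1) % 10**k)       # 0: adding 1 wraps around, consistent with ...999 = -1
threes = int("3" * k)            # truncation of ...333
print((3 * threes + 1) % 10**k)  # 0: consistent with ...333 = -1/3
```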
Fejér's approximation looks worse to me than the naive Fourier one. It is worse to be persistently ~10% off than to have a few oscillations of about 10% near the discontinuity, and the zoomed-in graph shows that Fejér's approximation sits near the deepest trough of the Gibbs error region instead of averaging the error out like a good approximation would.
Ultimately it depends on your specific application which kinds of error are more or less problematic. Fejér's approximation drops off in the "right" direction, going down where the jump drops down, whereas the Gibbs phenomenon overshoots upwards.
@@DrTrefor sure, but I think it would be better to show an example where Fejér's approximation was better, instead of one where it was differently bad.
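For anyone who wants to see both failure modes side by side, here is a rough numpy sketch of my own (the harmonic count and grid are arbitrary choices) comparing the Fourier partial sum of a square wave with its Fejér mean near the jump at x = 0:

import numpy as np

# square wave sign(sin x): Fourier series (4/pi) sum sin((2k+1)x)/(2k+1)
M = 400                                      # number of harmonics
x = np.linspace(1e-3, 0.5, 2000)             # just to the right of the jump
n = np.arange(1, M + 1)[:, None]
coef = np.where(n % 2 == 1, 4.0 / (np.pi * n), 0.0)
S = np.cumsum(coef * np.sin(n * x), axis=0)  # S[m-1] = sum of first m harmonics
fourier = S[-1]                              # ordinary partial sum
fejer = S[:-1].sum(axis=0) / M               # Fejer mean (S_0 + ... + S_(M-1))/M, S_0 = 0
print(fourier.max())  # ~1.179: the Gibbs spike, about 9% of the jump of size 2
print(fejer.max())    # <= 1: the Fejer mean undershoots instead of spiking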
hello dr, the name Cesàro is Italian, so the "Ce" is pronounced like "che" as in "check", not "se"
This is kind of the same question as "is infinity odd or even?"
Neither, as infinity is not a number
Love your videos
thank you so much!
I prefer to use the mode of the partial sums.
Although, an example of something that will never converge under this system does exist; in fact, I can name a specific function: sin(round(x)). So simple, but I don't believe it ever actually converges under this system, due to its pseudo-random nature. I don't know for sure; I'll have to test it.
We were taught this in terms of Fejér and Dirichlet kernels.
I thought you were going to give another video on a weighting function, similar to Numberphile's recent video on zeta(-1) = -1/12 where Tony uses a regulating function to get the answer without the infinity. ua-cam.com/video/beakj767uG4/v-deo.html
With the right filter, all sums converge; now the question is whether it will be useful. :p
Hi, our best teacher, I really need your help. I have a question and I did not find a solution for it; I hope you can help me.
An infinite mean is actually almost the same as an integral
geometry dash
Geometric series never converges and never reaches 1
Let's say we have 2 infinite series:
A = 1-1+1-1+1-1+1-1+....
B = -1+1-1+1-1+1-1+1-...
can we say that the two infinite series are equal or not (we're just switching the order of every pair)?
what if we look at it from the point of view of partial sum and Cesaro summation?
The Cesaro sum for B is -1/2. And both 1/2 and -1/2 are really unsatisfying to me, because 0 is the more prominent candidate for some sort of an average of the series. It's almost an expected value of the series, but, well, the terms of the series are not random really.
@@NoName-cp4ct you get 0 when averaging both A and B... so it almost looks like to get the "nice" (0) answer to alternating series like this "both ordering combinations must be considered"
@@NoName-cp4ct How is 0 the more convincing for the average? You are effectively averaging 0 and 1.
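The ±1/2 values are easy to check numerically; a plain-Python sketch (my own):

def cesaro(terms):
    partial, running, count = 0.0, 0.0, 0
    for t in terms:
        partial += t        # partial sum
        running += partial  # running total of the partial sums
        count += 1
    return running / count  # average of the partial sums

A = [(-1) ** k for k in range(100001)]        # 1 - 1 + 1 - 1 + ...
B = [(-1) ** (k + 1) for k in range(100001)]  # -1 + 1 - 1 + 1 - ...
print(cesaro(A))                    # ~ 0.5
print(cesaro(B))                    # ~ -0.5
print((cesaro(A) + cesaro(B)) / 2)  # ~ 0.0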
I do not like this title:
1-1+1-1+1... is not a sequence, hence I do not understand how one can say it does or does not converge. Maybe we think of it in terms of the sequence of partial sums, but in that case it is just the sequence 1,0,1,0,1,0,..., and that one shouldn't raise issues.
For this reason, a better title would be "making 1-1+1-1+1-1... converge" or something like that.
Please tell me how wrong I am, it would be a pleasure.
It converges to zero if it repeats exactly as typed, since the last term MUST be a -1
Hello old uncle once again
if I said 1-(1-1)-(1-1)-(1-1)... my series converges to 1 very quickly.
It's a different series though
@@Ryrzard with a little algebra, I can show this series equals 1-1+1-1......
@@WagesOfDestruction Rewrite it as a summation then
Arthur Watton would be spinning in his grave if he were dead
I love the proof that the harmonic series doesn't converge... I always present it to people and they love it. Or at least they fake it. Kkkkk
0.9999999999999999999...=~ing.
a square paper also papering,too?
Not Universe but also universing,too?
We couldn't define smallest cell size of a square paper.
Actually it is a smallesting?smallesing largesting couldn't define one.
Ummm what?
This is no Maths anymore.
Came here to say, the answer is "No." 1-1+1-1+1-1+1-1+... does not converge.
Same, don't know how the video is 13 minutes.
I'm not good at math but I have a friend who told me that the sum 1-1+1-1+1-..... has the same result as the limit when n approaches infinity of the function 0^^n
This would be incorrect. The partial sums of the Grandi series are given by 1/2 + (-1)^n/2, not by 0^n.
1 or 0, so it is divergent (not convergent).
… but the infinite sum 1-1+1-1+1-1+… 'converges' to the set {0, 1}
... in the sense that lim sup Sn = 1 and lim inf Sn = 0
@@D.Hilbert and in the sense that it is the set of limit points, which tells you more about the series than just the limsup and liminf
Depends on your axioms and applications, but both can be correct. Your answer requires us to say that convergence can be a set and not a singular value. We also lose some properties of the Cesàro sum and its extensions: we could say that 1-2+3-… = 1/4 with the point-valued extension, but no set-valued convergence exists there. But we could extend it further and get some kind of "function-valued" convergence. It's all a matter of choice and of what properties of different types of mathematical objects you are exploring. It's why we choose to leave x/0 undefined in elementary algebra, even though there are many ways it can be defined in more specific, self-consistent algebraic settings (e.g., calculus vs. extended reals vs. the Riemann sphere vs. hyperreal/surreal infinities and infinitesimals).
Also see: the inverse function of f(x) = x^2 -> g(x) = {sqrt(x), -sqrt(x)} when we consider set-valued functions to be acceptable instead of only point-valued functions.
@@ClementinesmWTF very well said
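The set-valued reading is easy to see numerically; a tiny plain-Python sketch (my own illustration):

from itertools import accumulate

S = list(accumulate((-1) ** k for k in range(1000)))  # partial sums 1, 0, 1, 0, ...
tail = S[500:]               # a late stretch, to approximate limiting behavior
print(sorted(set(tail)))     # [0, 1] -- the set of limit points
print(min(tail), max(tail))  # liminf = 0, limsup = 1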
Lol no.
Answer: no. It is not a convergent series. Being off screen doesn't change the nature of the series. We can use certain methods to obtain an associated value that is helpful, but the series still fails to converge: its partial sums just oscillate forever.
nope. but you can make up your own definition of "converges" and then pretend real hard. that sort of thing is in fashion.
Absolute nonsense!
1/2 + 1/4 + 1/8 and so on, down to 1/512 + 1/1024 and beyond, can NEVER EVER get to the magical number 1.
There will ALWAYS be that very tiny, minuscule bit still to be added, and adding that little bit, or the NEXT little bit, or the little bit after that, will always leave a little bit that keeps the total from being EQUAL to the theoretical 1!
3Blue1Brown did an excellent video proving my very point. Go look it up.
Yes, the sum of the series is 1. What you're saying is nonsense
Go look up the definition of a limit
You need to understand the definition of infinite summation.
We first define Sn = sum from k=1 to n of (1/2)^k.
Now we define the sum from k=1 to infinity of (1/2)^k to be the limit as n approaches infinity of Sn; this limit exists and equals 1, QED.
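And a quick numeric illustration of that definition (plain Python, my own sketch), showing the leftover gap shrinking below any tolerance:

for n in (10, 20, 50):
    S_n = sum(0.5 ** k for k in range(1, n + 1))
    print(n, S_n, 1 - S_n)  # the gap 1 - S_n is exactly (1/2)^n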
It sure can get to the magical 1 - you just add all the elements in the infinite set.
By your reasoning .9 repeat is not equal to 1. But they are equal in the true sense of being equal.
.9 repeat is a real number and 1 is a real number. They both correspond to exactly the same point on the real number line. Hence they are equal. This whole idea of not being able to physically add up an infinite number of values in a finite time misses the point entirely. It is possible to take the infinite set as a whole and have an infinite number of positive values add to a finite sum. If it didn't, reality in this universe would not exist.
If 3B1B claims that 1/2 + 1/4 + 1/8 + ... = 1 then he is wrong on this one. It's that simple.
@@islamicguidance7204 You are bang on. I've had this discussion a thousand times to no avail. There are simply some who will just never get it no matter how many times you try.
en.wikipedia.org/wiki/Fej%C3%A9r_kernel