I'm mesmerized by how intuitive you made the theorem seem. I always felt it was sort of like a "paradox", but your explanation made it look almost plain obvious. Great video!
@@0x6a09 No it wasn't. You were lucky enough to have a good teacher who made it obvious to you the first time the subject was introduced. *Not everyone is so lucky.*
@@General12th How are my math teachers related to this? They never talked about this. But they talked about limits, and I think this is enough to understand this theorem.
My strategies for the extra problems: To make the series oscillate, instead of having one target sum, make it two, for example 1 and 0. First take enough positive terms to get the partial sum above 1, then take negative terms to get it below 0, then repeat. This way the series will have infinitely many partial sums both above and below the interval [0,1]. To make the series diverge to infinity, use the same strategy, but make the target sums increase by 1 each time they are reached. I.e. first take positive terms to get above 1, then take negative terms to get below 0, then get above 2, then below 1, then above 3, then below 2, etc. Since the negative terms converge to 0, the sequence of partial sums will have an increasing lower bound that will go to infinity.
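The oscillation strategy above can be sketched concretely. Below is a minimal, illustrative Python sketch (the function name and the 1/0 targets are just this example's choices), applied to the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...:

```python
from itertools import count

def oscillating_partial_sums(cycles, hi=1.0, lo=0.0):
    """Rearrange 1 - 1/2 + 1/3 - 1/4 + ... so its partial sums forever
    overshoot `hi` and then undershoot `lo`."""
    pos = (1.0 / n for n in count(1, 2))   # positive terms: 1, 1/3, 1/5, ... (sum diverges)
    neg = (-1.0 / n for n in count(2, 2))  # negative terms: -1/2, -1/4, ... (sum diverges)
    total, turning_points = 0.0, []
    for _ in range(cycles):
        while total <= hi:                 # take positive terms until we cross the top target
            total += next(pos)
        turning_points.append(total)       # now total > hi
        while total >= lo:                 # take negative terms until we cross the bottom target
            total += next(neg)
        turning_points.append(total)       # now total < lo
    return turning_points
```

Each inner loop terminates because the one-signed subseries diverge, so every term is eventually consumed while the turning points stay above 1 and below 0 forever.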
Corollary: you can get S from infinitely many rearrangements. Just pick an arbitrarily long (finite) sub-sequence, and the remaining terms still form a conditionally convergent series, which you can rearrange to get a new target sum of S minus the partial sum from the sub-sequence.
I agree with your approach on the non-converging sequence. For the sequence diverging to infinity I like your approach, but you can do a little better: whenever you cross your threshold, switch to negative numbers until you go below it. Then increase your threshold (you added 1, but you could also multiply by 2, square it, or use any other sequence that diverges to infinity) and repeat. You will use your negative terms much more slowly, but that doesn't matter. As long as they all get used eventually...
Does that work for oscillation, though? The reason the rearrangement works normally is because the positive and negative values approach zero. The deviation of the partial sum from the target sum decreases and eventually converges to zero. To oscillate like that, the deviation is at LEAST one, and never gets smaller.
@@williamrutherford553 Since the sum of the positive terms diverges, even though the terms converge to zero, you can always take enough of them to pass the upper target. Same for negative terms and the lower target.
I think with diverging you can use the successive partial sums of any divergent series as the upper target sums, and any sequence of numbers that's bounded above and below as the difference between pairs of upper and immediately following lower target sums. There may be more generalizations to be made here; it seems that for the upper and lower target sum series you have to pick two series such that the differences of the corresponding pairs of terms both don't converge to zero and don't diverge to infinity, and that one of the series themselves diverges to infinity (it is obvious that the other one does if these conditions are true)
Sometimes I get frustrated by how you always state the obvious (you do it very slowly, of course). Then I realise that I would never have come up with what is "obvious" in the middle of the video had you not told me about what is "obvious" previously. And then I just realise that that's how math works! You just state the "obvious" and you come up with more "obvious" statements. This proves how good of an educator you are, great job.
There's a legend that Isaac Newton invented the cat door. A door within a door. So obvious in hindsight anybody could come up with it. But it took the world's greatest genius to actually come up with it. Now, this never happened, but this kind of thing happens all the time. Genius is putting together the novel out of the obvious.
@@angelmendez-rivera351 Look at anybody's top 5 or 10 list of the greatest geniuses of all time, and Newton will be on every one of them, often at the top. Who else compares? Archimedes, Einstein, John von Neumann?
The list of geniuses throughout history with a great list of accomplishments is very extensive. However, if they were ordered by the magnitude of their accomplishments then Newton, Gauss, Euler, Einstein, and Von Neumann are at the top of the list (at least among those who lived recently enough for us to be sure of their accomplishments).
@@angelmendez-rivera351 That list is missing Archimedes and Leonardo da Vinci. Euler was superior to Gauss in terms of achievements, and John von Neumann may have been smarter, but didn't accomplish as much as Fermi or Planck. And Stephen Hawking certainly deserves a spot on the list.
I remember learning about conditionally convergent series in calculus class, and it always felt like, "why do I care if a series is conditionally vs. absolutely convergent?" Your video answered my long-standing question. Thank you!
Using the fact that ∞ + x = ∞ for any real number x, we can get ∞ = ∞ - x, so ∞ - ∞ = -x. Now replace -x with y: we can say that y = -x and y belongs to the set of real numbers, so we have ∞ - ∞ = y. And y can be any real number, because for any y you want to choose, there's an x that will get you that value.
Fun fact: for the (-1)^n/(2n+1) series, the rearranged sum exactly equals arctanh(p)/2 + pi/4, where p is the proportion of positive terms minus the proportion of negative terms, ranging from -1 (all negative terms) to 1 (all positive terms). For example, the ++- pattern converges to arctanh(1/3)/2 + pi/4. I would love to see a proof for why this is the case, because it seems too simple not to have an elegant reason for being that way.
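The claimed closed form is easy to check numerically. Here is a small Python sketch (the function name is mine) that rearranges the Leibniz series 1 - 1/3 + 1/5 - ... into a repeating ++- pattern and compares against arctanh(1/3)/2 + pi/4; treat it as an empirical check, not a proof:

```python
import math

def rearranged_leibniz(pos_per_cycle, neg_per_cycle, cycles):
    """Partial sum of 1 - 1/3 + 1/5 - 1/7 + ... rearranged so that each
    cycle takes `pos_per_cycle` positive terms, then `neg_per_cycle`
    negative ones."""
    total, p, q = 0.0, 0, 0
    for _ in range(cycles):
        for _ in range(pos_per_cycle):
            total += 1.0 / (4 * p + 1)   # positive terms: 1, 1/5, 1/9, ...
            p += 1
        for _ in range(neg_per_cycle):
            total -= 1.0 / (4 * q + 3)   # negative terms: 1/3, 1/7, 1/11, ...
            q += 1
    return total

# ++- pattern: two positives per negative, so p = (2 - 1) / (2 + 1) = 1/3
predicted = math.atanh(1 / 3) / 2 + math.pi / 4
```

Running many cycles, the partial sums settle right next to the predicted value, consistent with the formula in the comment above.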
You can also just say that 1/1+1/3+1/5+1/7+... (over terms below n) approaches 1/2+1/4+1/6+1/8+... plus a finite constant, i.e. C + ln(n/2)/2. If half of the terms are missing, n is half as big, so we are at C+ln(n/4)/2 = C+ln(n/2)/2 - ln(2)/2. I instantly came up with this proof when I saw the video title, so this is very natural and simple.
p = -1 isn't literally all negative terms; it's the limit where positive terms become increasingly rare until they make up 0% of the terms. And the same for p = 1.
For the infinity rearrangement try this. Suppose that the up and down arrows are sorted in terms of decreasing length. What you do is add on the first down arrow. Then add enough up arrows such that the total sum of all the arrows is bigger than 1. Add the second down arrow, then add enough up arrows to make the total sum bigger than 2. Add the third down arrow, then add enough up arrows to make the total sum bigger than 3. Add the 4th down arrow, then add enough up arrows to make the total sum bigger than 4. And so on.
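That staircase can be sketched in a few lines of Python (the function name is mine; the alternating harmonic series stands in as the concrete example): one down arrow before each level k, then up arrows until the running total tops k.

```python
from itertools import count

def staircase_peaks(levels):
    """One negative term before each level k = 1, 2, ..., then positive
    terms until the running total exceeds k.  Every term eventually gets
    used, and the partial sums climb without bound."""
    pos = (1.0 / n for n in count(1, 2))   # 1, 1/3, 1/5, ...
    neg = (-1.0 / n for n in count(2, 2))  # -1/2, -1/4, ...
    total, peaks = 0.0, []
    for k in range(1, levels + 1):
        total += next(neg)                 # the k-th down arrow
        while total <= k:                  # up arrows until we top level k
            total += next(pos)
        peaks.append(total)
    return peaks
```

Each inner loop finishes because the positive subseries diverges, and the peaks pass every integer, so the rearranged partial sums diverge to +infinity.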
If YouTube allowed double likes I would have given them to you... that passing comment about L_2 spaces made my day; it makes so much sense - invariance of the inner product under space transformations
My idea for one oscillation algorithm: Sort each subset (up and down arrows) by size, as done in the video, and use the arrows in order of decreasing size. Start at zero (use this as a new minimum), then use the first up arrow to create a new maximum. From here on, after reaching a new maximum, add down arrows until you drop below the previous global minimum. And every time you reach a new global minimum, switch to adding up arrows until you exceed the previous global maximum. This will have the series oscillate with increasing amplitude - because every time you go down, you go down to an all-time low, and every time you go up, you go up to an all-time high. Because the series of up and down arrows each diverge when viewed independently (see video), it will always be possible to add enough arrows to exceed the previous maximum/minimum.
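For the alternating harmonic series, that strategy might look like the following Python sketch (names are mine). Each pass records a new all-time high, then a new all-time low:

```python
from itertools import count

def widening_swings(cycles):
    """Rearrangement whose partial sums swing to a new all-time high, then
    a new all-time low, on every cycle."""
    pos = (1.0 / n for n in count(1, 2))   # 1, 1/3, 1/5, ...
    neg = (-1.0 / n for n in count(2, 2))  # -1/2, -1/4, ...
    total, hi, lo = 0.0, 0.0, 0.0
    highs, lows = [], []
    for _ in range(cycles):
        while total <= hi:          # climb past the previous all-time high
            total += next(pos)
        hi = total
        highs.append(hi)
        while total >= lo:          # sink past the previous all-time low
            total += next(neg)
        lo = total
        lows.append(lo)
    return highs, lows
```

Because the highs strictly increase and the lows strictly decrease, the partial sums can never settle on a single limit, which is exactly the divergence-by-oscillation the video asks for.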
Would that result in a series that switches between positive and negative infinity? For instance, imagine that each time you switch directions, your new y-coordinate becomes something like n=1-2+3-4+5-6+... The sum, n, keeps switching between increasingly large negative and positive numbers. I think if you add a restriction like "|n| should always be approximately equal to a constant k", then you can also make it just oscillate between finite numbers. For instance, for k=1, you might oscillate between n=-1 and n=1 in the limit. Similarly, to rearrange the series to make it blow up to infinity, I think you could just make n monotonically increasing
A minor mistake in the argument at around 8:52 about why the added terms must converge to 0. It is not because otherwise the series must diverge to infinity, but because otherwise it must diverge to infinity OR diverge by oscillating (e.g. the 1 - 1 + 1 - 1 ... series does not diverge to infinity).
Yep, you are right. And good catch! I was actually aware of that before, but I decided to leave the argument as is (while gritting my teeth) since it was a sidenote focused on intuition that I wanted to keep brief, and I felt covering that edge case might distract from the video's main thrust.
This was an incredibly lucid explanation. I think I’m actually going to try to teach this to the students in my math-for-art-students course. Before seeing this video, I wouldn’t have dreamed of trying to explain something like this to them, but if I replicate your explanation, I think some (hopefully most) of them might actually get it!
I wish I'd had this video back when I was taking calculus. They never explained *why* to care about conditional vs absolute convergence, so the problems about determining conditional vs absolute convergence just felt like splitting hairs.
Oscillating sequence: add positive terms until the partial sum is greater than 1, then add negative terms until the partial sum is less than 0 Positive infinity: add positive terms until the partial sum is greater than 1, then add the first negative term, then keep adding positive terms until the partial sum is greater than the next integer followed by the next single negative term (negative infinity can be reached by starting with the negative terms)
I don't know what about your channel is different, but you've helped me understand these paradoxical maths things better than anyone else! Both this and fractional derivatives, finally explained understandably
I think the strength of this channel is that the explanations don’t require the viewer to hold many unexplained pieces in place for unknown reasons, waiting for the thing that ties them together.
@@ericpmoss the video on fractional derivatives kind of does require the viewer to do so a little bit. The viewer is referred to another video for some things in the integration part, and the gamma function is not defined.
I think this one might be my fav one so far! Amazing job! The animations you made make it really easy to follow along and helped me understand the concept a lot!
For divergence to infinity, get the arrows to build a staircase by placing up arrows until their sum is over 2, then down arrows until it's under 1, then up arrows until over 3, down until under 2, up until over 4, etc. All the arrows must be used and it diverges to infinity. For a divergence by oscillation, pick any two real numbers a, b with a > b (WLOG). Place up arrows until their sum is greater than a, then place down arrows until the sum is less than b, and then repeat. The arrows are always enough to get from a to b and back again, so the sum oscillates between the two numbers.
19:40 My take on your problem: 1 - For a series to diverge by oscillation, set up 2 goal lines, one that needs to be crossed in each phase (upward and downward). 2 - To diverge to infinity, do exactly the same thing as diverging by oscillation, but move each target up or down by some amount on each cycle.
Elegant. The reason this seems counter intuitive is because we forget that two infinities are not always equal. If you pick one pos and one neg number each time, that is a different infinite series than taking two pos and one neg each time. They will not converge to the same number.
Subscribed and watched every video. Great job with the visualizations and distilling the salient features and ideas of the proof into something not just manageable but intuitive, all without any handwaving. Keep going!
I just found this video a few seconds ago and I must respectfully give you a compliment by saying you have an outstanding sense of humor, because I giggled and laughed and rewound the intro several times. Are you also a comedian?
The production quality for the first video is insane! You totally matched mathologer's video quality on this one. (He did a video on the same topic a few years ago)
Superb video! To summarize, the trick is that we have changed the underlying distribution of the negative numbers vs. that of the positive numbers. In other words, how often the one occurs with respect to the other, leading to the different result.
The to-infinity construction is fun. You have two types of steps: even and odd. At each even step you enumerate arrows to get to n - x, where n is the step you're on and x is the next negative number. Then at each odd step you add that negative number. The cleanest construction I can think of.
This has made me think of the axiom of choice and how careful one has to be to be confident of infinite proofs using just cardinality ignoring order, see Cantor and Gödel.
Diverging to infinity: Consider the length of the largest red arrow, and place a number of green arrows that add up to more than twice that length. Now place the longest red arrow. Continue in the same pattern. Of course red and green can be switched here to make negative infinity
I think for positive infinity, all you need to do is add green arrows until it’s longer than the next red arrow, not twice as long. It doesn’t need to diverge *fast* it just needs to diverge.
@@titaniadioxide6133 nope you need the factor of 2, since the length of the red arrow decreases but by choosing this factor of 2, you're actually having sums of the lengths of the red arrows as bound, rather than individual lengths themselves
@@helloitsme7553 but so long as you are always going farther up than the last arrow took and the next arrow will take you down, won't you always keep going up? Plotting all of your maximum values you might end up with a graph that looks a bit like ln(x) where it nearly flattens out, but so long as it never does you will still approach infinity.
@@GhostGlitch. Always increasing doesn't mean you'll diverge to infinity. For example the sequence 0/1, 1/2, 2/3, 3/4, 4/5, 5/6... is strictly increasing but converges to 1. He mentions this at 14:22 with a geometric series as an example.
I tried to solve your challenge and here’s what I came up with: To make a series converge to nothing, just add up-arrows until they exceed 1, then add down-arrows until they’re below 0, then add up-arrows again until they reach 1 and so on. (Obviously the numbers don’t have to be 1 and 0). To blow up to positive infinity, add up-arrows until you hit an integer (k), then add down-arrows until you’re below k-1/2, then add up-arrows until you reach k+1, then down-arrows until you’re below k+1/2 and so on. There’s probably a much simpler solution, but this works fine. Negative infinite is just the opposite strategy.
Positive infinity: let nᵢ be the i-th negative number. Add positive terms until you get above 2|n₁|, then add n₁; continue adding positive terms until you get above |n₁|+2|n₂|, then add n₂. This will create a series that stays higher than |n₁|+|n₂|+..., which diverges. The same method can give you negative infinity. Swinger: let p > q -> add positive terms until your sum S ≥ p, then add negative terms until S ≤ q, and go back to ->
What strikes me is that while you can prove that there will always be some re-arrangement of a conditionally convergent sequence that will generate any real number you choose, it doesn't mean that finding that precise rearrangement will be that easy in reality, as virtually all real numbers will be generated by a rearrangement of the sequence with no regular pattern, in which you simply need to know an infinite number of terms, rather than be able to infinitely generate them from a pattern (i.e. if you randomly choose a target number to converge on then it is almost certain that the rearrangement solution will not be of the form 'a times up followed by b times down, repeated infinitely many times' or anything of that sort).
Fun fact: using the same construction you can actually find a sub-sequence of the partial sum sequence that tracks any sequence u(n) you want. If you call S(n) the partial (rearranged) sum sequence, it means that you can find a rearrangement and a sequence k_n such that S(k_n) - u(n) tends to zero. The sum will therefore approach every term of the sequence, getting better and better every time. Even stronger, you can control how it tends to zero: if you fix a threshold ε > 0, then you can make it so that |S(k_n) - u(n)| < ε for every n.
Way to approach infinity: rearrange it into many blocks, the sum of each block larger than a positive constant (for positive infinity) or smaller than a negative constant (for negative infinity). Eventually we will get an infinite series of blocks, all larger (or smaller) than the chosen constant, so the sum will diverge. Way to approach nothing: choose any two constants, call the bigger one k1 and the smaller one k2 (i.e. k1 > k2), then repeat the process that adds positive terms until the sum > k1, then adds negative terms until the sum < k2.
Oscillation solution: Pick two values, we'll say 1 and 2, then add positive numbers until you pass the greater value (2) then add negative numbers until you pass the lesser value (1). This will oscillate between 1 and 2. Infinity solution: Add a negative number from the list, then add positive numbers from the list until the current run of positive numbers is greater or equal to double the last negative value, the end result after these two steps will always be at least the absolute value of the sum of the negative values (infinity). Positive and negative may be swapped to get negative infinity.
Here are my ideas: For +∞: Sum all the negative numbers, then add the positive ones. Vice versa for -∞. (At least I think that would work). Divergent: First place the largest positive arrow (or number). Next place negative arrows (starting with the largest) such that they are longer than the positive one in total. Then place positive arrows such that they are longer than the previous negative ones together. Repeat for all the numbers.
Imagine a dancer who takes one step forward followed by one step backward forever. Then rearrange their steps so instead they take two steps forward followed by one step backwards forever. Somehow in my example it is not surprising at all that the dancers end up in different places after you repeat their moves an arbitrary number of times, even though you can rearrange the infinite sequence of moves of one dancer into the other's.
A good point. But then you might find it surprising that this strategy doesn't work on all series. An absolutely convergent series like 1 - 1/4 + 1/9 - etc. consisting of alternating reciprocal squares will not change its sum no matter how you rearrange it. To pursue your analogy, whether rearranging the sequence of steps in a dance affects the long-term position of a dancer depends on the kind of dance they're doing!
That is the thought I had. After 10,000 steps, or as in the video, summing the first 10,000 terms, you are no longer summing the same terms. For the infinite summation you could say the infinitely many terms are somewhere in both sequences, but for any intermediate value, like adding the first 5 or 10 terms, you are not adding the same terms. Even if you took a googol of terms, it is still a very small number compared to the infinitely many terms in the sequence. Convergence is a hypothesis about the answer derived from observing how summing the starting terms behaves; if you are not adding the same terms then you won't get the same result.
@@morphocular What would the sum be if, instead of doing + + -, you were to put every negative term first, followed by all the positive terms? I'm not sure you'd even place a single positive term, as there are infinitely many negative terms in that sequence to be placed first.
I have answered the bonus questions of making conditionally convergent series approach infinity or oscillate. I could be wrong though. A.) Approaching an infinity: Step #1 Keep in mind the magnitude of the next largest negative value. You can have it as a variable (I'll call mine N) Step #2 Add enough positive values until the net is greater than N Step #3 Add N Step #4 Get a new N value Step #5 Repeat Steps #2-4 for eternity Because the positive series values diverge, we'll always have enough positive values to reach our goal. Additionally, because we are doing this forever, every negative value will be used B.) Oscillate Step #1 Set up your goal points for oscillation (Mine are 1 and 2) Step #2 Add positive values until you reach or exceed the greater valued goal point (For me, it's 2) Step #3 Once that value is obtained, add negative values until the lesser valued goal point is reached or exceeded (Now, my value is 2) Step #4 Repeat Steps #2-3 for eternity Because both negative and positive terms diverge, there will always be enough to reach both goal points
@TheMoped1000 Yes, thank you for correcting me; however, I am too lazy to edit it. If seeing my comment is that much of an eyesore, you can just "ctrl+shift+i" it away
My strategies: Divergence: as soon as it crosses the value S_1, switch to a new value S_2, which it'll "travel" to until it reaches S_2, then when it reaches S_2, switch to a new value, and so on and so forth. Positive and negative infinity: I have no idea
It's actually pretty simple. If the series converges absolutely, so do the positive and negative subseries. But in a series where all elements have the same sign, rearrangement can't change the sum. Conversely, in a conditionally convergent series, both the positive and the negative subseries diverge. Hence, you can approach any value by just picking negative elements when the partial sum is above the target and positive elements when below. You will always pass the target because you have an infinite budget, and you will get closer and closer because the elements get arbitrarily small.
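That greedy rule fits in a few lines of Python when applied to the alternating harmonic series (a sketch; the function name is mine). The partial sum keeps crossing the target, and the crossing error shrinks with the term size:

```python
from itertools import count

def rearrange_to(target, num_terms):
    """Greedy rearrangement: take the next positive term while below the
    target, the next negative term otherwise.  Both 'budgets' are infinite
    because each one-signed subseries diverges."""
    pos = (1.0 / n for n in count(1, 2))   # 1, 1/3, 1/5, ...
    neg = (-1.0 / n for n in count(2, 2))  # -1/2, -1/4, ...
    total = 0.0
    for _ in range(num_terms):
        total += next(pos) if total < target else next(neg)
    return total
```

With enough terms, the partial sum lands within roughly one term's length of any real target you pick, positive or negative.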
I came up with one way to approach infinity. For each down arrow, pop enough up arrows until their magnitude is more than twice that of the down arrow. Since this meta-addition will result in more than the total magnitude of the down arrows, the sum will approach positive infinity. The same can be applied to approach negative infinity.
The crux of this is that conditionally convergent series are just discrete ways to write out infinity - infinity, which you know to be indeterminate, and can take on any value when given a concrete representation. It's not entirely obvious this is the case initially, but once you realize that the conditional convergence implies P+N = finite, but P-N (or P + abs(N)) is infinite, it becomes pretty clear imo For the end challenges, you could simply set a series of targets to reach, for +inf let's say we aim for 2, 1, 3, 2, 4, 3, 5, 4,... , for -inf simply the negation of all those terms, and for divergence we can do 1, -1, 1, -1,... . In all these cases, there's a total infinite upwards movement and downwards movement, so all the arrows will be used.
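The P/N decomposition is easy to see numerically: for the alternating harmonic series, the signed partial sums settle near ln 2 ≈ 0.693, while the positive-only and negative-only partial sums keep growing without bound. A quick Python sketch (the function name is mine):

```python
def split_growth(n_terms):
    """Signed partial sum vs. one-signed partial sums of 1 - 1/2 + 1/3 - ..."""
    signed = sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))
    pos_only = sum(1.0 / k for k in range(1, n_terms + 1, 2))  # 1 + 1/3 + 1/5 + ...
    neg_only = sum(1.0 / k for k in range(2, n_terms + 1, 2))  # 1/2 + 1/4 + ...
    return signed, pos_only, neg_only
```

In the comment's notation, the signed sum plays the role of the finite P + N, while pos_only + neg_only is the sum of absolute values P - N, which diverges.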
That's actually quite important in quantum mechanics! When switching from Schrödinger's to Dirac's picture of QM, one substitutes the concept of "wavefunctions" with that of "state vectors" living in a Hilbert space that is (for most problems) infinite dimensional. In this case the usual representation of matrices and vectors is not very intuitive, since an infinite-dimensional matrix cannot even be written down. Nevertheless the maths behind it still works in the conventional ways, and the definition of scalar product between vectors of infinitely many components is coherent and most importantly finite. Turns out that the dot product of a state vector with itself is just the infinite sum of the square magnitudes of the Fourier coefficients needed to build the Fourier expansion of the wavefunction in the chosen basis. So yeah, a dot product between infinite dimensional vectors actually represents a converging series, and this fact has physical importance!
My take: divergence to infinity: Separate the sequence into positive and negative numbers, sorted by value (like in the video). Use positive numbers until you get above 2, then use negative numbers until you get below 1. Then use positive numbers until you get above 3, then switch again to negative numbers until it falls below 2. Once again, use positive numbers to get above 4, and use negative numbers to get below 3. At any step you use the positive numbers to add about +2 to the sum and then you 'take away' about 1 with negative numbers. The conditional convergence assumption should make this possible (since both the sequences of positive and negative numbers are divergent). Example where it doesn't converge: Same setup as before and as in the video: two sequences of positive and negative numbers, sorted. Use positive numbers to get above 1, then use negative numbers to get below 0, and repeat.
You can get infinite or finite divergent sums by changing the target line to some other function which has a strictly smaller derivative than the lower bounding function of the terms (or the negative of the negative terms). If A·ln(n+1) is strictly less than the sum of all positive terms, then B·ln(n+1) is a valid target line iff B ≤ A.
By adding the positive terms more quickly than the negative terms, you're causing the positive component to approach infinity more quickly than the negative component. When dealing with limits at infinity, the speed at which infinity is approached also matters, like the limit of f(x) - g(x) where both functions approach infinity, but not at the same rate.
19:18 That makes sense. Instead of trying to approach the line y=S, you can make the arrows approach the line y=Sx. This is just a linear equation which at x=inf has the value y=inf. What x is in this case is the count of how many times the target value line is crossed, not how many arrows are taken into account. Otherwise it would not be possible to stay on the line (I think). In that same way, you can make the arrows approach y=sin(x), to have the series not approach anything at all.
For a more general proof, replace the constant target number with an arbitrary infinite sequence: it literally doesn't matter what the sequence is, as long as every term is a real number. Starting with the first number in the target sequence, apply enough up or down arrows (one or the other; not both) to cross that value; then advance to the next value in the target sequence and repeat the process. The difference between your partial sum and the value of that target sequence will converge to zero, which can also be phrased as your infinite sum converging to the behavior of the target sequence. So if the target sequence converges, so will your infinite sum; if it diverges, so will your infinite sum, and in exactly the same way: positive infinity, negative infinity, or forever oscillating. Or, if you have a more detailed means of measuring how the series behaves in the long run (e.g., hyperreal numbers or intervals), your partial sums can be made to match whatever the measurement of the target sequence is.
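Here's how that target-chasing idea can be sketched in Python for the alternating harmonic series (the function name is mine): for each target value in turn, spend arrows of one sign until the partial sum crosses it.

```python
from itertools import count

def chase(targets):
    """Visit each target in turn: climb with positive terms to cross it
    from below, or descend with negative terms to cross it from above.
    Each recorded value lands within one term's length of its target."""
    pos = (1.0 / n for n in count(1, 2))   # 1, 1/3, 1/5, ...
    neg = (-1.0 / n for n in count(2, 2))  # -1/2, -1/4, ...
    total, visits = 0.0, []
    for t in targets:
        if total < t:
            while total < t:               # up arrows only
                total += next(pos)
        else:
            while total > t:               # down arrows only
                total += next(neg)
        visits.append(total)
    return visits
```

Feeding in a convergent target sequence makes the rearranged sum converge to its limit; feeding in something like 1, 0, 2, 0, 3, 0, ... reproduces the divergent cases.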
Here is my take: +inf: set the target n=1, sum up enough positive terms to get >1. Now take 1 negative term. Next, set the target n = max{2, sum+1}, so we will *have to* use at least 1 positive term to reach it, and we will get to at least 2. Continuing this, we know we use at least 1 positive term and at least 1 negative term each time, and we reach higher numbers. We also don’t oscillate too much, because negative terms (used only once at each step), go down to 0. -inf: use n=min{-k, sum-1} with k=1,2,… No limit: Just set the goal to +k when summing positive terms, and -k when summing negative terms. This way we will *have to* use positive and negative terms infinitely often. And one subsequence (of maxima) approaches +inf, and another (of minima) approaches -inf.
My approach for oscillating: choose S1, S2 in the reals so that S1 =/= S2. Without loss of generality we can choose S1 > S2. Then add up arrows until you pass S1, then add down arrows until you reach S2, then head back to S1, and repeat forever. Again we can use the fact that the series is conditionally convergent to know we can always reach the other side. My approach for infinity (you could switch up and down for negative infinity): Start with a single up arrow, then add a down. From here on add the next N up arrows, where N is the number of terms in the series so far; at this point you would add 2. Then a down arrow. Now you are adding the next 5 up terms, then a down term. Now 11 ups, 1 down. 23 ups, 1 down. 47 ups, 1 down. As you continue on, you will have more upward motion than downward, unless the magnitude of the 1 down is larger than the N ups. However, as the sum of positive terms approaches infinity (conditional convergence), you will eventually add so many up terms that no down term could compensate, as the down terms approach 0.
I don't feel that it's particularly paradoxical to begin with. A (only slightly naive) definition of an infinite sum is the number that we approach as we take longer and longer partial sums. If you cram twice as many positive terms into each partial sum as in the first example, it'll obviously make all partial sums bigger, and since you changed the ordering for the whole infinite series, it holds for the infinite sum as well.
I agree; the infinite sum is often defined as the limit as the number of terms tends to infinity. This means partial sums matter, and moving terms around gives two partial sums different terms. So even if the infinite sums have the same terms, the partial sums do not have the same collection of terms.
The sums always added up to the same value for the number of terms shown, i.e. up to 1/23, in all three cases for me: when taken in the original order, when the adjacent terms are interchanged, and also when all the positive terms are taken together and all the negative terms are grouped together.
That last challenge is pretty simple: start by adding sufficiently many green arrows such that the distance from the tip of the last one is strictly greater than the length of the longest red arrow. Add the red arrow, then go back to adding green arrows until, again, the distance from the last green tip to the target line is greater than the length of the second-longest red arrow. Rinse and repeat to infinity (and beyooooond!)
I'm not through the video yet, but I imagine a good way to intuit the issue would be to take the definition of "trend towards": you can add infinitely many terms and the sum will become "arbitrarily close to" the number it trends towards. Then take into account that, at a given term in an oscillating series like the first example, there are always counterbalancing terms before it; if you rearrange the series so that those counterbalancing terms show up less and less often, you can make the sum bigger/smaller. And this is exactly what happened in the example, which is why I write this down.
This is undoubtedly a nice video and contains a lot of new ideas, but I think the real challenge is how these visual thoughts can be turned into a formal proof 👍❤
No it's not! The sum changes because terms significantly close to infinity are being discarded and being replaced by terms of the opposite sign. So while the first terms are all maintained, the last terms often get replaced.
@@Laff700 Oh yes it is. Of course you have to use up all the numbers there are, i.e. lose the pattern in the end - if infinity were a number. But since infinity is a loose end (and not a number), you can *always* keep the pattern and thus yield different results.
@@flatisland No terms at the BEGINNING are being omitted. That much was shown in the video. That doesn't prove that no terms are being omitted anywhere, though. It's crucial to remember that not all infinities are of the same magnitude. For example, if a=∞h and b=∞(1-h), then both a and b are infinite. They aren't equal, though, as b/a=-1+1/h. The original series had an equal number of positive and negative terms (or at least really close to equal if ∞ is odd), so a=b=∞/2. Let's say we now add 2 positive terms for every negative term (a=2b=2∞/3). Now we have omitted the last ∞/6 terms of the negative portion and added ∞/6 new terms to the end of the positive portion. Thus we don't have all the same terms as in our original series, as ∞/6 terms have been changed. I found a general formula for the partial sums of the positive and negative portions and used this theory of mine to get the equation N=(Pi-Log[(-1+1/h)])/4, where the first ∞h positive terms are used and the first ∞(1-h) negative terms are used. My results agree with what's shown in the video at 2:00. In general, it's best to treat ∞ as an unknown variable which has an extremely high value. This line of reasoning is typically sufficient to use ∞ coherently in math, provided the modulus of ∞ isn't too important. The rules shouldn't all be thrown to the wind whenever an ∞ pops up.
This strategy can be used to rearrange the terms to approach any desired sequence, e.g. sin(n), ln(n). Once this is proved, the "extra" results are simple consequences of this fact.
@@angelmendez-rivera351 I misspoke slightly. The terms can be selected such that any desired sequence is a subsequence of the sequence of partial sums.
Proof that rearranging a conditionally convergent series can make it reach infinity: add up enough numbers to move toward the infinity you want to reach, up to some value V. Once you reach value V, add a negative number (if you want to reach +infinity) or a positive number (if you want to reach -infinity). Then add up enough numbers to surpass V at a new value V2, and repeat. This is guaranteed to get you to the infinity you wish to reach, for the following reasons: 1. Removing an item from an infinite set still leaves a set of infinite size, because infinity-1=infinity. 2. You are guaranteed to be able to surpass value V every time, because the sum of the infinite series of positive/negative terms of a conditionally convergent series is infinite, and if you remove an element from it, the sum is still infinite because infinity - x = infinity.
Very good motivation for and introduction to _Riemann's Rearrangement Theorem_! Here are the three things that stood out the most: 1.) The idea to represent the positive / negative terms of the sum by "upward" and "downward" movement highlighted by colors/arrows is incredibly intuitive and easy to follow. 2.) It is fascinating that the divergence of both the positive and the negative part of a conditionally convergent series was mentioned just fleetingly close to the end (16:14). To me, discovering the divergence of both parts was the biggest realization and _the_ major breakthrough to understanding the theorem, because that is the fundamental difference between absolutely and conditionally convergent series. Thank you for presenting a different approach! 3.) Maybe a (very small) technical error: at 08:31 you consider sequences *a_n* that do not tend to zero. While such a sequence _can_ tend to a different limit (as shown in the video following 08:31), it doesn't have to. Counter-example: *a_n := (-1)^m if n = m! for some m ∈ ℕ, and a_n := 0 otherwise*. The sequence above is not a null sequence, but it does not converge at all. I'd say the precise condition for the sequences *a_n* considered at 08:31ff should be "*a_n* has infinitely many terms with *|a_n| ≥ 𝞮 > 0*". I understand the precise statement may not translate well to visualization, because there are too many cases to consider that are ultimately discarded anyway. If the rest of the video were not so amazingly rigorous, I would not have nitpicked^^
To make the series diverge to infinity, just add enough up arrows to get a length longer than the largest down arrow, add the down arrow, then make a length longer than the next down arrow with your up arrows and add that down arrow, and keep doing this forever. Since the series is conditionally convergent, I can always use enough terms to make any length I want, which will be longer than any down arrow in the sequence. To make it never converge, just make enough arrows go above some threshold (let's say 1), and enough arrows go down afterward to, say, 0. Again, I have enough large-enough terms to do this infinitely many times and never converge or diverge.
I know nobody will see this comment on a year-old vid, but I feel clever so I'm posting it. My idea for making a sequence that approaches infinity goes as follows: with the arrows arranged by size, take enough new up arrows that in total they exceed the absolute value of the last down arrow (basically, tip to tip, the ups are longer than the last down). Add all those up arrows and then the next down arrow to the sequence, and just repeat that. Another way to say this is that you just need to get a little higher each time you are going up. Unless I'm missing something, this would approach infinity, just potentially very slowly.
For the infinite divergent series, what I would do is start by adding the first positive then negative numbers. Next, add positive numbers until the partial sum is greater than the initial positive number by at least 1. I’ll call this partial sum S1 (S0 would be the initial number in the positive sequence). Add the next negative number. Now add positive numbers until the partial sum is greater than S1 by at least 1. This partial sum is S2. Repeat again by adding a negative number then adding positive numbers until it’s greater than S2 by at least 1 and call that sum S3. Etcetera. You now have an infinite sequence of increasing numbers S0, S1, S2... and one could easily see how to do something similar in the negative infinity case. This will not converge because each iteration in the sequence is greater than the last by at least 1 each time meaning the sequence isn’t Cauchy.
Change the number at which you switch: e.g., every time you go up, double the number, or switch between two (or more) values. In fact you can probably do some fun things by switching the number randomly with different distributions, like making it not diverge to anything. What makes this weird is: what happens if you pick a non-constructible number? I suppose this would imply that there couldn't be a nice rule to pick the next set of arrows.
Infinite: Use the first negative term, then use the positive terms until you've gone above the first negative. Then add the second negative, then add positive until you're above that. Because we're constantly going up, and both sums don't converge, this will get arbitrarily large. Oscillating: add until you get above a certain point, then subtract until you get below a different point. This will never converge, nor will it diverge to infinity.
It's obvious that if you compute the first n elements of these sums, the results will be different, since you are grabbing negative numbers from farther away (you are grabbing 2 negatives for each positive one). So the first 100 terms in the first sum have 50 positive and 50 negative, but the first 100 terms in the second one have 34 positive and 66 negative. IF you COMPUTED the WHOLE SUM (which is impossible to do, since it's infinite) the result would be the same (π/4). To sum these sequences you would need a mathematical formula to simplify them, and both sums would give you the same. Oh, and I haven't seen the video yet, so don't blame me pls.
I was having trouble accepting this at first, until I realised that we assume we know the real number we're trying to make our series sum to, to infinitely many decimal digits; or equivalently, we always know precisely whether our partial sums are greater than, less than, or equal to that number.
Study Weierstrass. For any real number, there exists such a rearrangement of the infinite series such that its sum will be equal to this chosen real number. 🦉
As long as you take all infinitely many of the 1-over-odd-number terms, the sum is the same. But when you take any finite even number of them, added and subtracted, you will have a ratio of added to subtracted numbers of about 1-to-1 for the first arrangement and around 2-to-1 for the second one.
I'm mesmerized by how intuitive you made the theorem seem. I always felt it was sort of like a "paradox", but your explanation made it look almost plain obvious. Great video!
It's a paradox in the sense that "conditional convergence" isn't really a valid mode of convergence.
It was obvious all the time.
The sign of a great teacher I would say !
@@0x6a09 No it wasn't. You were lucky enough to have a good teacher who made it obvious to you the first time the subject was introduced. *Not everyone is so lucky.*
@@General12th How are my math teachers related to this? They never talked about this. But they talked about limits, and i think this is enough to understand this theorem.
My strategies for the extra problems:
To make the series oscillate, instead of having one target sum, make it two, for example 1 and 0. First take enough positive terms to get the partial sum above 1, then take negative terms to get it below 0, then repeat. This way the series will have infinitely many partial sums both above and below the interval [0,1].
To make the series diverge to infinity, use the same strategy, but make the target sums increase by 1 each time they are reached. I.e. first take positive terms to get above 1, then take negative terms to get below 0, then get above 2, then below 1, then above 3, then below 2, etc. Since the negative terms converge to 0, the sequence of partial sums will have an increasing lower bound that will go to infinity.
Corollary: you can get S from infinitely many rearrangements. Just pick an arbitrarily long (finite) sub-sequence, and the remaining terms still form a conditionally convergent series, which you can rearrange to get a new target sum of S minus the partial sum of the sub-sequence.
I agree with your approach for the non-converging sequence. For the sequence diverging to infinity I like your approach, but you can do a little better: whenever you cross your threshold, switch to negative numbers until you go below it. Then increase your threshold (you added 1, but you could also multiply by 2, square it, or use any other sequence that diverges to infinity) and repeat. You will use your negative terms much more slowly, but that doesn't matter, as long as they all get used eventually...
Does that work for oscillation, though? The reason the rearrangement works normally is because the positive and negative values approach zero. The deviation of the partial sum from the target sum decreases and eventually converges to zero. To oscillate like that, the deviation is at LEAST one, and never gets smaller.
@@williamrutherford553 Since the sum of the positive terms diverges, even though the terms converge to zero, you can always take enough of them to pass the upper target. Same for negative terms and the lower target.
I think with diverging you can use the successive partial sums of any divergent series as the upper target sums, and any sequence of numbers that's bounded above and below as the difference between pairs of upper and immediately following lower target sums.
There may be more generalizations to be made here; it seems that for the upper and lower target-sum series you have to pick two series such that the differences of the corresponding pairs of terms neither converge to zero nor diverge to infinity, and such that one of the series itself diverges to infinity (it is obvious that the other one then does too, if these conditions hold).
I like how this is your only video and it's an absolute banger.
It being the only video is, I hope, only a temporary deficiency.
@@morphocular I was here before you blew up!!
Sometimes I get frustrated by how you always state the obvious (you do it very slowly, of course). Then I realise that I would never have come up with what is "obvious" in the middle of the video had you not told me about what is "obvious" previously. And then I just realise that that's how math works! You just state the "obvious" and you come up with more "obvious" statements. This proves how good of an educator you are, great job.
There's a legend that Isaac Newton invented the cat door. A door within a door. So obvious in hindsight anybody could come up with it. But it took the world's greatest genius to actually come up with it. Now, this never happened, but this kind of thing happens all the time. Genius is putting together the novel out of the obvious.
@@angelmendez-rivera351 Look at anybody's top 5 or 10 list of the greatest geniuses of all time, and Newton will be on every one of them, often at the top. Who else compares? Archimedes, Einstein, John von Neumann?
The list of geniuses throughout history with a great list of accomplishments is very extensive. However, if they were ordered by the magnitude of their accomplishments then Newton, Gauss, Euler, Einstein, and Von Neumann are at the top of the list (at least among those who lived recently enough for us to be sure of their accomplishments).
@@angelmendez-rivera351 All of them are tied at the very top of the list.
@@angelmendez-rivera351 That list is missing Archimedes and Leonardo da Vinci; Euler was superior to Gauss in terms of achievements, and John von Neumann may have been smarter but didn't accomplish as much as Fermi or Planck. And Stephen Hawking certainly deserves a spot on the list.
I remember learning about conditional convergent series when taking calculus class and it always felt "why do I care if a series is conditionally vs. absolutely convergent". Your video answered my long-standing question. Thank you!
It makes sense to a degree, the same way that ∞ - ∞ can equal whatever you want
That's an excellent comment!
Using the fact that
∞ + x = ∞
For any real number x, We can get
∞ = ∞ - x
∞ - ∞ = - x
And we can replace -x with y: we can say that y = -x and y belongs to the set of real numbers. Now we have
∞ - ∞ = y.
And y can be any real number, because for any y you want to choose, there's an x that will get you that value.
∞−∞ is indeterminate
@@manioqqqq ∞−∞ is straight up undefined, because infinity is not a number
@@SlightSmile Incorrect
This explanation video was on par with 3b1b, if not better. As a very loyal 3b1b viewer, I want to emphasize that it means a lot.
"loyal ... viewer"? What's that supposed to mean? Hahahahahahaha.
Fun fact: for the (-1)^n/(2n+1) series, a rearrangement converges exactly to arctanh(p)/2 + pi/4, where p is the difference between the proportions of positive and negative terms, ranging from -1 (all negative terms) to 1 (all positive terms).
For example, the ++- pattern converges to arctanh(1/3)/2 + pi/4
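The claimed closed form is easy to check numerically. For the ++- pattern, p = (2-1)/(2+1) = 1/3, and arctanh(1/3)/2 = ln(2)/4, so the rearranged sum should be ln(2)/4 + π/4 ≈ 0.9587. A quick sketch (the truncation length is an arbitrary choice):

```python
import math

# Rearranged Leibniz series: two positive terms, then one negative term.
# Positive terms are 1/(4i+1); negative terms are 1/(4j+3).
s = 0.0
i = j = 0
for _ in range(100_000):              # 100k blocks = 300k terms
    s += 1.0 / (4 * i + 1); i += 1
    s += 1.0 / (4 * i + 1); i += 1
    s -= 1.0 / (4 * j + 3); j += 1

predicted = math.atanh(1 / 3) / 2 + math.pi / 4   # = ln(2)/4 + pi/4
assert abs(s - predicted) < 1e-4
```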
I would love to see a proof for why this is the case, because it seems too simple not to have an elegant reason for being that way.
I don’t have a proof, but the Taylor series of arctanh(x) goes like x + x^3/3 + x^5/5 + … which must play a role in it somehow
@@zachteitler9622 wow thanks! you clearly did a lot of work to find all that out
You can also just say that 1/1+1/3+1/5+1/7+... approaches 1/2+1/4+1/6+1/8+... plus a finite constant, i.e. C + ln(n/2)/2 after the terms with denominators below n.
If half of the terms are missing, n is effectively half as big, so we are at C+ln(n/4)/2 = C+ln(n/2)/2 - ln(2)/2.
I instantly came up with this proof when I saw the video title, so this is very natural and simple.
@@caspermadlener4191 no your proof is bullshit
-1 isn't all negative terms; it's just that positive terms are very rare and become rarer as you go on, making up 0% of the terms in the limit.
And same for 1
By simply explaining the topic really well, this video made me feel like I knew a lot about the subject.
A really impressively clear explanation. Thanks a lot for making this!
Thank you for watching! I really appreciate it.
It is amazing how it has already been two years... I remember the time I watched your first video when it was 2 months old.... time flies by.
For the infinity rearrangement try this.
Suppose that the up and down arrows are sorted in order of decreasing length. What you do is add the first down arrow. Then add enough up arrows that the total sum of all the arrows is bigger than 1. Add the second down arrow, then add enough up arrows to make the total sum bigger than 2. Add the third down arrow, then add enough up arrows to make the total sum bigger than 3.
Add the 4th down arrow then add enough up arrows to make the total sum bigger than 4. And so on.
❤This is my favorite theorem in the whole calculus, both for sounding so counterintuitive and with so intuitive a proof.
If YouTube allowed double likes I would have given them to you... that passing comment about L_2 spaces made my day; it makes so much sense: invariance of the inner product under space transformations.
My idea for one oscillation algorithm:
Sort each subset (up and down arrows) by size, as done in the video, use the arrows in order of decreasing size.
Start at zero (use this as a new minimum), then use the first up arrow to create a new maximum.
From here on, after reaching a new maximum, add down arrows until you exceed the previous global minimum. And every time you reach a new global minimum, switch to adding up arrows until you exceed the previous global maximum.
This will have the series oscillate with increasing amplitude - because every time you go down, you go down to an all-time low, and every time you go up, you go up to an all-time high.
Because the series of up and down arrows each diverge when viewed independently (see video), it will always be possible to add enough arrows to exceed the previous maximum/minimum.
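The scheme above can be sketched directly on the series 1 - 1/3 + 1/5 - ... (the 0.1 overshoot margin and the phase count are my own choices; the number of terms needed per phase grows very quickly, which is why the simulation stays short):

```python
def swing(phases):
    """Alternately push the partial sum strictly above the previous all-time
    high, then strictly below the previous all-time low, using the terms of
    1 - 1/3 + 1/5 - ... (positives 1/(4i+1), negatives 1/(4j+3))."""
    s = 0.0
    i = j = 0
    hi = lo = 0.0
    records = []
    for phase in range(phases):
        if phase % 2 == 0:                 # up phase: beat the old maximum
            while s <= hi + 0.1:
                s += 1.0 / (4 * i + 1)
                i += 1
            hi = s
        else:                              # down phase: beat the old minimum
            while s >= lo - 0.1:
                s -= 1.0 / (4 * j + 3)
                j += 1
            lo = s
        records.append(s)
    return records

rec = swing(6)
highs, lows = rec[0::2], rec[1::2]
assert all(a < b for a, b in zip(highs, highs[1:]))   # new highs keep rising
assert all(a > b for a, b in zip(lows, lows[1:]))     # new lows keep falling
```

Both inner loops terminate only because the positive and negative subseries each diverge, exactly as the comment notes.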
I believe the optimal version might be in NP.
Would that result in a series that switches between positive and negative infinity? For instance, imagine that each time you switch directions, your new y-coordinate becomes something like n=1-2+3-4+5-6+... The sum, n, keeps switching between increasingly large negative and positive numbers.
I think if you add a restriction like "|n| should always be approximately equal to a constant k", then you can also make it just oscillate between finite numbers. For instance, for k=1, you might oscillate between n=-1 and n=1 in the limit.
Similarly, to rearrange the series to make it blow up to infinity, I think you could just make n monotonically increasing
A minor mistake in the argument at around 8:52 about why the added terms must converge to 0: it is not that otherwise the series must diverge to infinity, but that otherwise it must diverge either to infinity OR by oscillating (e.g. the 1 -1 1 -1… series does not diverge to infinity).
Yep, you are right. And good catch! I was actually aware of that before, but I decided to leave the argument as is (while gritting my teeth) since it was a sidenote focused on intuition that I wanted to keep brief, and I felt covering that edge case might distract from the main video's thrust.
This was an incredibly lucid explanation. I think I’m actually going to try to teach this to the students in my math-for-art-students course. Before seeing this video, I wouldn’t have dreamed of trying to explain something like this to them, but if I replicate your explanation, I think some (hopefully most) of them might actually get it!
I wish I'd had this video back when I was taking calculus. They never explained *why* to care about conditional vs absolute convergence, so the problems about determining conditional vs absolute convergence just felt like splitting hairs.
Oscillating sequence: add positive terms until the partial sum is greater than 1, then add negative terms until the partial sum is less than 0
Positive infinity: add positive terms until the partial sum is greater than 1, then add the first negative term, then keep adding positive terms until the partial sum is greater than the next integer followed by the next single negative term (negative infinity can be reached by starting with the negative terms)
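The oscillating strategy here is easy to simulate on 1 - 1/3 + 1/5 - ... Below is a sketch; I use a narrower band (above 0.3, below 0.0) than the comment's 1 and 0 only so the check runs quickly, since the number of terms needed grows exponentially with the band width:

```python
# Rearrange so the partial sums rise above 0.3, fall below 0.0, forever.
s = 0.0
i = j = 0                      # next positive / negative term index
peaks, troughs = [], []
for _ in range(8):             # eight full up/down cycles
    while s <= 0.3:            # add positives 1/(4i+1) until we pass 0.3
        s += 1.0 / (4 * i + 1)
        i += 1
    peaks.append(s)
    while s >= 0.0:            # add negatives 1/(4j+3) until we pass 0.0
        s -= 1.0 / (4 * j + 3)
        j += 1
    troughs.append(s)

# Infinitely many partial sums above 0.3 and below 0.0: no limit can exist.
assert min(peaks) > 0.3 and max(troughs) < 0.0
assert peaks[-1] < peaks[0]    # overshoot shrinks as the terms shrink
```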
I don't know what about your channel is different, but you've helped me understand these paradoxical maths things better than anyone else! Both this and fractional derivatives, finally explained understandably
I think the strength of this channel is that the explanations don’t require the viewer to hold many unexplained pieces in place for unknown reasons, waiting for the thing that ties them together.
@@ericpmoss the video on fractional derivatives kind of does require the viewer to do so a little bit. The viewer is referred to another video for some things in the integration part, and the gamma function is not defined.
I think this one might be my favorite one so far! Amazing job! The animations you made make it really easy to follow along and helped me understand the concept a lot!
For divergence to infinity, get the arrows to build a staircase by placing up arrows until they're over 2, then down arrows until under 1, then up arrows until over 3, down until under 2, up until over 4, etc. All the arrows get used and the sum diverges to infinity.
For divergence by oscillation, pick any two real numbers a, b with a > b (WLOG). Place up arrows until their sum is greater than a, then place down arrows until the sum is less than b, and repeat. There are always enough arrows to get from a to b and back again, and the sum oscillates between the two numbers.
What does (WLOG) mean in your comment🤔?
@@christopherrice891 without loss of generality, basically it doesn’t matter which of a and b is bigger because it works in both cases :)
What an explanation!!! Absolutely the best Maths video I've seen this year!
amazing. I felt I watched a 5 minute video since your presentation was smooth and intuitive, great job!!
You must have put a whole lot of work into this and it was so good, thanks a bunch for this!
Great job making counterintuitive and difficult-to-grasp concepts seem natural!
19:40 My take on your problem:
1. To make a series diverge by oscillation, set two goal lines, one of which needs to be crossed in each phase (upward and downward).
2. To make it diverge to infinity, do exactly the same thing as for divergence by oscillation, but move each target up or down by some amount in each cycle.
Elegant. The reason this seems counterintuitive is that we forget that two infinities are not always equal. If you pick one positive and one negative number each time, that is a different infinite series than taking two positives and one negative each time. They will not converge to the same number.
Subscribed and watched every video. Great job with the visualizations and distilling the salient features and ideas of the proof into something not just manageable but intuitive, all without any handwaving. Keep going!
How have I never come across you before? Excellent quality of content, far beyond your current 10k subscriber count.
Keep it up!
Wow, I'm very impressed by the combination of such pretty visuals with a very good explanation!
I just found this video a few seconds ago and I must respectfully give you a compliment by saying you have an outstanding sense of humor, because I giggled and laughed and rewound the intro several times. Are you also a comedian?
This is good content. Thank You. The idea of absolute convergence is much more clear to me now.
The production quality for the first video is insane! You totally matched mathologer's video quality on this one. (He did a video on the same topic a few years ago)
Superb video! To summarize, the trick is that we have changed the underlying distribution of the negative numbers relative to that of the positive numbers, in other words, how often the one occurs with respect to the other, leading to a different result.
The explanation that got me is that the series is ∞ + (−∞) when you arrange P then N, which can result in any number.
The to-infinity construction is fun. You have two types of steps: even steps and odd. At each even step you enumerate positive terms to get to n − x, where n is the step you're on and x is the next negative number. Then at each odd step you enumerate a negative number. The cleanest construction I can think of.
This has made me think of the axiom of choice and how careful one has to be to be confident of infinite proofs using just cardinality ignoring order, see Cantor and Gödel.
Diverging to infinity:
Consider the length of the largest red arrow, and place a number of green arrows that add up to more than twice that length. Now place the longest red arrow. Continue in the same pattern.
Of course red and green can be switched here to make negative infinity
I think for positive infinity, all you need to do is add green arrows until it’s longer than the next red arrow, not twice as long. It doesn’t need to diverge *fast* it just needs to diverge.
@@titaniadioxide6133 Nope, you need the factor of 2: the length of each red arrow decreases, but by choosing this factor of 2 you get the sum of the lengths of the red arrows as a lower bound, rather than the individual lengths themselves.
@@titaniadioxide6133 remember that you have to add the red arrows themselves also
@@helloitsme7553 but so long as you are always going farther up than the last arrow took and the next arrow will take you down, won't you always keep going up? Plotting all of your maximum values you might end up with a graph that looks a bit like ln(x) where it nearly flattens out, but so long as it never does you will still approach infinity.
@@GhostGlitch. Always increasing doesn't mean you'll diverge to infinity. For example the sequence 0/1, 1/2, 2/3, 3/4, 4/5, 5/6... is strictly increasing but converges to 1. He mentions this at 14:22 with a geometric series as an example.
I tried to solve your challenge and here’s what I came up with:
To make a series converge to nothing, just add up-arrows until they exceed 1, then add down-arrows until they’re below 0, then add up-arrows again until they reach 1 and so on. (Obviously the numbers don’t have to be 1 and 0).
To blow up to positive infinity, add up-arrows until you hit an integer (k), then add down-arrows until you’re below k-1/2, then add up-arrows until you reach k+1, then down-arrows until you’re below k+1/2 and so on. There’s probably a much simpler solution, but this works fine. Negative infinite is just the opposite strategy.
Very well articulated. Your channel is a great boon to your audience and burgeoning mathematicians the world over.
Positive infinity: let nᵢ be the i-th negative number. Add positive terms until you get above 2|n₁|, then add n₁; continue adding positive terms until you get above |n₁|+2|n₂|, then add n₂, and so on. This creates a series whose partial sums stay above the partial sums of |n₁|+|n₂|+..., which diverges. The same method can give you negative infinity.
Swinger: let p > q. Add positive terms until your sum S ≥ p, then add negative terms until S ≤ q, and repeat.
What strikes me is that while you can prove there will always be some rearrangement of a conditionally convergent series that converges to any real number you choose, it doesn't mean that finding that precise rearrangement will be easy in practice. Virtually all real numbers will be generated only by a rearrangement with no regular pattern, in which you simply need to know an infinite number of terms rather than being able to generate them from a rule. (I.e., if you randomly choose a target number to converge to, then it is almost certain that the rearrangement will not be of the form 'a terms up followed by b terms down, repeated infinitely', or anything of that sort.)
Fun fact: using the same construction you can actually find a sub-sequence of the partial-sum sequence that approaches any sequence u(n) you want. If you call S(n) the rearranged partial-sum sequence, it means you can find a rearrangement and a sequence k_n such that S(k_n) − u(n) tends to zero. The partial sums will therefore approach every term of the sequence, getting better and better every time. Even stronger, you can control how it tends to zero: if you fix a threshold ε > 0, then you can make it so that |S(k_n) − u(n)| < ε for all n.
Way to approach infinity: rearrange the terms into infinitely many chunks, each chunk summing to more than a positive constant (for positive infinity) or less than a negative constant (for negative infinity). Since every chunk exceeds the chosen constant in magnitude, the sum diverges.
Way to approach nothing: choose any two constants, call the bigger one k1 and the smaller one k2 (i.e. k1 > k2), then repeat the process of adding positive terms until the sum > k1, then adding negative terms until the sum < k2.
Oscillation solution: Pick two values, we'll say 1 and 2, then add positive numbers until you pass the greater value (2) then add negative numbers until you pass the lesser value (1). This will oscillate between 1 and 2.
Infinity solution: Add a negative number from the list, then add positive numbers from the list until the current run of positive numbers is greater or equal to double the last negative value, the end result after these two steps will always be at least the absolute value of the sum of the negative values (infinity). Positive and negative may be swapped to get negative infinity.
Here's my ideas:
For +∞: Sum all the negative numbers, then add the positive ones. Vice versa for -∞. (At least I think that would work).
Divergent: first place the largest positive arrow (or number). Next place negative arrows (starting with the largest) such that together they are longer than the positive one. Then place positive arrows such that they are longer than the previous negative ones together. Repeat for all the numbers.
Where you showed π÷4=atan(1) was absolutely COMIC
Imagine a dancer who takes one step forward followed by one step backward forever. Then rearrange their steps so instead they take two steps forward followed by one step backwards forever. Somehow in my example it is not surprising at all that the dancers end up in different places after you repeat their moves an arbitrary number of times, even though you can rearrange the infinite sequence of moves of one dancer into the other's.
A good point. But then you might find it surprising that this strategy doesn't work on all series. An absolutely convergent series like 1 - 1/4 + 1/9 - etc. consisting of alternating reciprocal squares will not change its sum no matter how you rearrange it. To pursue your analogy, whether rearranging the sequence of steps in a dance affects the long-term position of a dancer depends on the kind of dance they're doing!
That is the thought I had: after 10,000 steps, or, as in the video, summing the first 10,000 terms, you are no longer summing the same terms. For the infinite summation you could say the infinitely many terms are somewhere in both sequences, but for any intermediate value, like adding the first 5 or 10 terms, you are not adding the same terms. Even if you took a googol terms, it is still a very small number of terms compared to the infinitely many terms in the sequence. Convergence is a hypothesis about the answer, derived from observing how summing the starting terms behaves; if you are not adding the same terms, then you won't get the same result.
@@morphocular What would the sum be if, instead of doing + + -, you were to put every negative term first, followed by all the positive terms? I'm not sure you'd ever place a single positive term, as there are infinitely many negative terms to be placed first.
I have answered the bonus questions of making conditionally convergent series approach infinity or oscillate.
I could be wrong though.
A.) Approaching an infinity:
Step #1 Keep in mind the magnitude of the next largest unused negative value. You can store it as a variable (I'll call mine N)
Step #2 Add enough positive values until the net is greater than N plus the current round number (the target has to keep growing; if you only require the net to exceed N, the partial sums can stall near zero instead of diverging)
Step #3 Add that negative value (i.e., subtract N)
Step #4 Get a new N value
Step #5 Repeat Steps #2-4 for eternity
Because the positive series values diverge, we'll always have enough positive values to reach our goal. Additionally, because we are doing this forever, every negative value will be used
B.) Oscillate
Step #1 Set up your goal points for oscillation (Mine are 1 and 2)
Step #2 Add positive values until you reach or exceed the greater valued goal point (For me, it's 2)
Step #3 Once that value is obtained, add negative values until the lesser valued goal point is reached or passed (Now, my value is 1)
Step #4 Repeat Steps #2-3 for eternity
Because both negative and positive terms diverge, there will always be enough to reach both goal points
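The two recipes above translate into a few lines of Python (a sketch of mine, not from the thread, using the alternating harmonic series 1 - 1/2 + 1/3 - ... as the conditionally convergent series; in strategy A I make the target grow by the round number, since requiring the net to exceed only N can let the partial sums stall near zero):

```python
import itertools

def positives():
    # 1, 1/3, 1/5, ... : the positive terms of the alternating harmonic series
    return (1.0 / (2 * k - 1) for k in itertools.count(1))

def negatives():
    # -1/2, -1/4, -1/6, ... : its negative terms
    return (-1.0 / (2 * k) for k in itertools.count(1))

def strategy_a(rounds):
    """A) Diverge: each round r, climb past r + |next negative|, then
    spend that negative term. The growing target forces the sums upward."""
    pos, neg = positives(), negatives()
    s, ends = 0.0, []
    for r in range(1, rounds + 1):
        n = next(neg)
        while s <= r + abs(n):
            s += next(pos)
        s += n                 # net position after round r is still above r
        ends.append(s)
    return ends

def strategy_b(rounds, lo=1.0, hi=2.0):
    """B) Oscillate: bounce the partial sums between hi and lo forever."""
    pos, neg = positives(), negatives()
    s, turns = 0.0, []
    for _ in range(rounds):
        while s < hi:
            s += next(pos)     # up arrows until we pass the upper goal
        turns.append(s)
        while s > lo:
            s += next(neg)     # down arrows until we pass the lower goal
        turns.append(s)
    return turns

print(strategy_a(5))  # each round-end sum exceeds its round number
print(strategy_b(3))  # alternates between just above 2 and just below 1
```

Because both subseries diverge, the inner `while` loops always terminate, and because both loops run forever, every term is eventually used.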
For A.), you forgot to say that it works for both positive and negative infinity
@TheMoped1000 Yes, thank you for correcting me; however, I am too lazy to edit it. If seeing my comment is that much of an eyesore, you can just "ctrl+shift+i" it away
Wow, that's a great video! Step by step easy to follow, and yet not glossing over any important detail.
My strategies:
Divergence: as soon as the sum crosses the value S_1, switch to a new target S_2, which it'll "travel" toward until it reaches it, then switch to a new target, and so on and so forth.
Positive and negative infinity: I have no idea
It's actually pretty simple. If the series converges absolutely, so do the positive and negative subseries. But in a series where all elements have the same sign, rearrangement can't change the sum. Conversely, in a conditionally convergent series, both the positive and the negative subsequence go to infinity. Hence, you can approach any value by just picking negative elements when the partial sum is above the target and positive values when below. You will always pass the target because you have an infinite budget and you will get closer and closer because the elements get arbitrarily small.
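The greedy picking rule described in this comment is easy to put into code (a sketch of mine, again using the alternating harmonic series; the target 0.123 is an arbitrary choice):

```python
import itertools

def rearranged_partial_sums(target, steps):
    """Pick the next positive term while at or below the target, the next
    negative term while above it. Both streams get drawn infinitely often,
    so every term is eventually used, and each crossing overshoots by at
    most one (ever-shrinking) term."""
    pos = (1.0 / (2 * k - 1) for k in itertools.count(1))  # 1, 1/3, 1/5, ...
    neg = (-1.0 / (2 * k) for k in itertools.count(1))     # -1/2, -1/4, ...
    s, sums = 0.0, []
    for _ in range(steps):
        s += next(pos) if s <= target else next(neg)
        sums.append(s)
    return sums

sums = rearranged_partial_sums(target=0.123, steps=100_000)
print(sums[-1])  # hugs 0.123 more and more tightly as steps grows
```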
I came up with one way to approach infinity. For each down arrow, pop enough up arrows that their total magnitude is more than twice that of the down arrow. Since each of these meta-additions nets more than the magnitude of its down arrow, and the down arrows' magnitudes sum to infinity, the sum approaches positive infinity.
Same can be applied to approach negative infinity.
The crux of this is that conditionally convergent series are just discrete ways to write out infinity - infinity, which you know to be indeterminate, and can take on any value when given a concrete representation. It's not entirely obvious this is the case initially, but once you realize that the conditional convergence implies P+N = finite, but P-N (or P + abs(N)) is infinite, it becomes pretty clear imo
For the end challenges, you could simply set a series of targets to reach, for +inf let's say we aim for 2, 1, 3, 2, 4, 3, 5, 4,... , for -inf simply the negation of all those terms, and for divergence we can do 1, -1, 1, -1,... . In all these cases, there's a total infinite upwards movement and downwards movement, so all the arrows will be used.
My intuition (before watching the whole video) is that the negative components "lag behind" compared to the positive ones
Amazing video, maybe the best maths video I saw on YT
That's actually quite important in quantum mechanics! When switching from Schrödinger's to Dirac's picture of QM, one substitutes the concept of "wavefunctions" with that of "state vectors" living in a Hilbert space that is (for most problems) infinite dimensional. In this case the usual representation of matrices and vectors is not very intuitive, since an infinite-dimensional matrix cannot even be written down. Nevertheless the maths behind it still works in the conventional ways, and the definition of scalar product between vectors of infinitely many components is coherent and most importantly finite. Turns out that the dot product of a state vector with itself is just the infinite sum of the square magnitudes of the Fourier coefficients needed to build the Fourier expansion of the wavefunction in the chosen basis. So yeah, a dot product between infinite dimensional vectors actually represents a converging series, and this fact has physical importance!
So so good! I'd love to see the other half of this, that ordering doesn't matter for absolutely convergent series
That is such a vivid presentation
my take:
convergence to infinity:
Separate the sequence into positive and negative numbers, sorted by value (like in the video). Use positive numbers until you get above 2, then use negative numbers until you get below 1. Then use positive numbers until you get above 3, then switch again to negative numbers until the sum falls below 2. Once again, use positive numbers to get above 4, and negative numbers to get below 3. At each step you use the positive numbers to add '+2' to the sum and then 'take away' 1 with negative numbers. The conditional convergence assumption makes this possible (since both the series of positive and of negative numbers are divergent).
Example where it doesn't converge:
Same setup as before and as in the video: two sequences of positive and negative numbers, sorted. Use positive numbers to get above 1, then use negative numbers to get below 0, and repeat.
This explanation is so beautiful and awesome
You can get divergent sums, either blowing up to infinity or remaining bounded, by changing the target line to some other function whose derivative is strictly smaller than that of the lower bounding function of the terms (or the negative of the negative terms). If A·ln(n+1) is strictly less than the sum of all positive terms, then B·ln(n+1) is a valid target line iff B < A.
man that's amazing illustration
By adding the positive terms more quickly than the negative terms, you're causing the positive component to approach infinity more quickly than the negative component. When dealing with limits at infinity, the speed at which infinity is approached also matters, like the limit of f(x)-g(x) where both functions are approaching infinity, but not the same infinity.
A great video, 3lue1brown level quality, really nice!
19:18 That makes sense. Instead of trying to approach the line y=S, you can make the arrows approach the line y=Sx. This is just a linear equation whose value goes to infinity as x does. Here x counts how many times the target line has been crossed, not how many arrows have been taken into account; otherwise it would not be possible to stay on the line (I think). In the same way, you can make the arrows approach y=sin(x), so the series doesn't approach anything at all.
For a more general proof, replace the constant target number with an arbitrary infinite series: it literally doesn't matter what the series is, as long as every term in the series is a real number. Starting with the first number in the target series, apply enough up or down arrows (one or the other; not both) to cross that value; then advance to the next value in the target series and repeat the process.
The difference between your partial sum and the value of that target series will converge to zero, which can also be phrased as your infinite sum converging to the infinite series. Meaning that you can make the infinite sum become infinitely close to the behavior of the target series. So if the target series converges, so will your infinite sum; if it diverges, so will your infinite sum, and in exactly the same way: positive infinity, negative infinity, or forever oscillating.
Or, if you have a more detailed means of measuring how the series behaves in the long run (e.g., hyperreal numbers or intervals), your partial sum can be made to match whatever the measurement of the target series is.
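Here is a rough sketch of that idea in Python (my own, not from the thread). To keep the run short I use the conditionally convergent series Σ(-1)^(k+1)/√k instead of the alternating harmonic one, since its terms shrink slowly enough that the partial sums travel quickly; the target sequence sin(1), sin(2), ... is my arbitrary choice:

```python
import itertools
import math

def chase(target, crossings):
    """Cross target(1), target(2), ... in turn: apply up arrows (or down
    arrows -- one or the other, never both) until each value is passed."""
    pos = (1.0 / math.sqrt(2 * k - 1) for k in itertools.count(1))
    neg = (-1.0 / math.sqrt(2 * k) for k in itertools.count(1))
    s, hits = 0.0, []
    for m in range(1, crossings + 1):
        t = target(m)
        if s <= t:
            while s <= t:      # up arrows until we cross t from below
                s += next(pos)
        else:
            while s > t:       # down arrows until we cross t from above
                s += next(neg)
        hits.append(s)         # partial sum just after crossing target(m)
    return hits

hits = chase(math.sin, 50)
# each gap |hits[m-1] - sin(m)| is at most one term, and the terms shrink
```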
Very well-explained and vividly so.
Awesome presentation, hoping for more videos. Thank you!
Here is my take:
+inf: set the target n=1, sum up enough positive terms to get >1. Now take 1 negative term.
Next, set the target n = max{2, sum+1}, so we will *have to* use at least 1 positive term to reach it, and we will get to at least 2.
Continuing this, we know we use at least 1 positive term and at least 1 negative term each time, and we reach higher numbers.
We also don’t oscillate too much, because the negative terms (used only once at each step) go down to 0.
-inf: use n=min{-k, sum-1} with k=1,2,…
No limit:
Just set the goal to +k when summing positive terms, and -k when summing negative terms.
This way we will *have to* use positive and negative terms infinitely often.
And one subsequence (of maxima) approaches +inf, and another (of minima) approaches -inf.
My approach for oscillating, choose S1, S2 in the reals so that S1 =/= S2. Without loss of generality we can choose S1 > S2. Then add up arrows until you pass S1, then add down arrows till you reach S2, then head back to S1, repeat forever. Again we can use the fact that the series is conditionally convergent to know we can always reach the other side.
My approach for infinity (you could switch up and down for negative infinity): Start with a single up arrow, then add a down. From here on, add the next N up arrows, where N is the number of terms in the series so far; at this point you would add 2. Then a down arrow. Now you add the next 5 up terms, then a down term. Then 11 ups, 1 down. 23 ups, 1 down. 47 ups, 1 down. As you continue, you will have more upward motion than downward, unless the magnitude of the 1 down is larger than the N ups. But since the sum of the positive terms diverges (conditional convergence), you will eventually add so many up terms that no down term could compensate, as the down terms approach 0.
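A quick simulation of this counting scheme (my own sketch, applied to the alternating harmonic series 1 - 1/2 + 1/3 - ...; the up-arrow counts come out 2, 5, 11, 23, 47, ... as described):

```python
import itertools

def doubling_scheme(rounds):
    """One up, one down, then repeatedly: N ups (N = terms used so far)
    followed by a single down."""
    pos = (1.0 / (2 * k - 1) for k in itertools.count(1))  # up arrows
    neg = (-1.0 / (2 * k) for k in itertools.count(1))     # down arrows
    s = next(pos) + next(neg)  # the seed: a single up, then a single down
    terms, sums = 2, []
    for _ in range(rounds):
        ups = terms            # 2, 5, 11, 23, 47, ...
        for _ in range(ups):
            s += next(pos)
        s += next(neg)         # then a single down arrow
        terms += ups + 1
        sums.append(s)
    return sums

print(doubling_scheme(15))  # climbs every round, if slowly
```

Since the up-arrow count roughly doubles each round, the positive gain per round settles near ln(2)/2 while the single down arrow shrinks to 0, so the round-end sums grow without bound, just slowly.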
A very good explanation of what seemed impossible. Well done.
I don't feel that it's particularly paradoxical to begin with. A (only slightly naive) definition of an infinite sum is the number that we approach as we take longer and longer partial sums. If you cram twice as many positive terms into each partial sum, as in the first example, it'll obviously make all the partial sums bigger, and since you changed the ordering for the whole infinite series, it holds for the infinite sum as well.
I agree; an infinite sum is often defined as the limit as the number of terms tends to infinity. This means partial sums matter, and moving terms around gives the two partial sums different terms. So even if the two infinite sums have the same terms, the partial sums do not.
The sums always added up to the same value for the number of terms shown (up to 1/23) in all three cases for me: when taken in the original order, when adjacent terms are interchanged, and when all the positive terms are taken together and all the negative terms grouped together.
That last challenge is pretty simple: start by adding sufficiently many green arrows such that the distance from the tip of the last one is strictly greater than the length of the longest red arrow. Add the red arrow, then go back to adding green arrows until, again, the distance from the last green tip to the target line is greater than the length of the second-longest red arrow. Rinse and repeat to infinity (and beyooooond!)
I added your video to my private list labeled '' treasure videos ''
And that's enough as a compliment
I'm not through the video yet, but I imagine a good way to build intuition is to start from the definition of "trends towards": you can add enough terms that the partial sum becomes arbitrarily close to the limit. In an oscillating series like the first example, every term has counterbalancing terms appearing before it; if you rearrange the series so those counterbalancing terms show up less and less often, you can make the sum bigger or smaller. And that is exactly what happened in the example, which is why I'm writing this down.
This is undoubtedly a nice video and contains a lot of new ideas but I think the real challenge is how can these visual thoughts be turned into a formal proof . This is the real challenge I believe 👍❤
why would you need to? the theorem is already proven, so you don't really have to make up any new proofs based on youtube videos
this is a brilliant way to explain why infinity is not a number, coz if it was you would always get the same result no matter how you rearrange
No it's not! The sum changes because terms significantly close to infinity are being discarded and being replaced by terms of the opposite sign. So while the first terms are all maintained, the last terms often get replaced.
@@Laff700 oh yes it is. of course you would have to use up all the numbers there are, i.e. lose the pattern in the end - if infinity were a number. but since infinity is a loose end (and not a number) you can *always* keep the pattern and thus yield different results
@@flatisland No terms at the BEGINNING are being omitted. That much was shown in the video. That doesn't prove that no terms are being omitted anywhere, though. It's crucial to remember that not all infinities are of the same magnitude. For example, if a=∞h and b=∞(1-h), then both a and b are infinities. They aren't equal though, as b/a=-1+1/h. The original series had an equal number of positive and negative terms (or at least really close to equal if ∞ is odd). So a=b=∞/2. Let's say we now add 2 positive terms for every negative term (a=2b=2∞/3). Now we have omitted the last ∞/6 terms of the negative portion and added ∞/6 new terms to the end of the positive portion. Thus we don't have all the same terms as we did in our original series, as ∞/6 terms have been changed. I found a general formula for the partial sums of the positive and negative portions and used this theory of mine to get the equation
N=(Pi-Log[(-1+1/h)])/4
where the first ∞h positive terms are used and the first ∞(1-h) negative terms are used. My results agree with what's shown in the video at 2:00. In general, it's best to treat ∞ as an unknown variable which has an extremely high value. This line of reasoning is typically sufficient to coherently use ∞ in math, provided the modulus of ∞ isn't too important. The rules shouldn't all be thrown to the wind whenever an ∞ pops up.
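For what it's worth, the N formula above checks out numerically against the video's series 1 - 1/3 + 1/5 - ... = π/4. Below is a small check I wrote (the setup and names are mine): taking two positives per negative means h = 2/3, so N = (π - ln(1/2))/4 = (π + ln 2)/4 ≈ 0.9587.

```python
import itertools
import math

def rearranged_leibniz(p, q, blocks):
    """Sum `blocks` rounds of p positive then q negative Leibniz terms."""
    pos = (1.0 / (4 * k + 1) for k in itertools.count(0))   # 1, 1/5, 1/9, ...
    neg = (-1.0 / (4 * k + 3) for k in itertools.count(0))  # -1/3, -1/7, ...
    s = 0.0
    for _ in range(blocks):
        for _ in range(p):
            s += next(pos)
        for _ in range(q):
            s += next(neg)
    return s

h = 2 / 3                                      # two positives per negative
predicted = (math.pi - math.log(1 / h - 1)) / 4
print(rearranged_leibniz(2, 1, 100_000), predicted)  # the two agree closely
```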
A beautiful entry. Thank you
This strategy can be used to rearrange the terms to approach any desired sequence, e.g. sin(n), ln(n). Once this is proved, the "extra" results are simple consequences of this fact.
@@angelmendez-rivera351 I misspoke slightly. The terms can be selected such that any desired sequence is a subsequence of the sequence of partial sums.
How to rearrange the terms so that the infinite sum diverges, into oscillation or to infinity:
pick 2 values, S_0 and S_1, such that S_0 < S_1. Add positive terms until the partial sum passes S_1, then add negative terms until it drops below S_0, and repeat forever; the sum keeps crossing both values, so it never settles.
proof that arranging a certain sequence of numbers within a conditionally convergent sequence can reach infinity:
Add up enough numbers going in the direction of the infinity you want until you reach a value V. Once you reach value V, add a negative number if you want to reach +infinity, or a positive number if you want to reach -infinity. Then add up enough numbers to surpass value V up to a new value V2, and repeat. This is guaranteed to get you to the infinity you wish to reach because of the following:
1. Removing an item from an infinite set still leaves that set infinite, because infinity-1=infinity
2. You will be guaranteed to surpass value V at least twice, because the sum of the infinite series of positive/negative numbers of a conditionally convergent series is infinite. If you remove an element from the series, its sum is still infinite, because infinity - x = infinity
Very good motivation for and introduction to _Riemann's Rearrangement Theorem_ ! Here are three things that stood out the most:
1.) The idea to represent the positive / negative terms of the sum by "upward" and "downward" movement highlighted by colors/arrows is incredibly intuitive and easy to follow.
2.) It is fascinating that the divergence of both the positive and negative parts of a conditionally convergent series was mentioned only fleetingly, close to the end (16:14).
To me, discovering the divergence of both parts was the biggest realization and _the_ major breakthrough to understanding the theorem, because that is the fundamental difference between absolute and conditionally convergent series. Thank you for presenting a different approach!
3.) Maybe a (very small) technical error: At 08:31 you consider sequences *a_n* that do not tend to zero. While such a sequence _can_ tend to a different limit (as shown in the video following 08:31), it doesn't have to. Counter-example:
*a_n := (-1)^m if n = m! for some m ∈ ℕ, and a_n := 0 otherwise*
The sequence above is not a zero-sequence, but it does not converge at all. I'd say the precise condition for sequences *a_n* considered at 08:31ff should be " *a_n* has infinitely many terms with *|a_n| ≥ 𝞮 > 0* ".
I understand the precise statement may not translate well to visualization because there are too many cases to consider that are ultimately discarded anyway. If the rest of the video was not so amazingly rigorous, I would not have nitpicked^^
To make the series diverge to infinity, just add enough up arrows to get a length longer than the largest down arrow, add the down arrow, and make a length longer than the next down arrow with your up arrows and add that down arrow, and keep doing this forever. Since it's conditionally convergent I can always have enough terms to make any length I want which will be longer than any down arrow in the sequence.
To make it never converge, just make enough arrows go above some threshold (let's say 1), and enough arrows go down afterward to, say, 0. Again, I have enough sufficiently large terms to do this infinitely many times and never converge or diverge.
I know nobody will see this comment on a year old vid, but I feel clever so I'm posting it. My idea for making a sequence that approaches infinity goes as follows:
With the arrows arranged by size you take enough new up arrows that in total they exceed the absolute value of the last down arrow (basically tip to tip the ups are longer than last down.) Add all those up arrows and then the next down arrow to the sequence, and just repeat that. Another way to say this is you just need to get a little higher each time you are going up.
Unless I'm missing something this would approach infinity, just potentially very slowly.
this got me in stitches, awesome vid!
For the infinite divergent series, what I would do is start by adding the first positive then negative numbers. Next, add positive numbers until the partial sum is greater than the initial positive number by at least 1. I’ll call this partial sum S1 (S0 would be the initial number in the positive sequence). Add the next negative number. Now add positive numbers until the partial sum is greater than S1 by at least 1. This partial sum is S2. Repeat again by adding a negative number then adding positive numbers until it’s greater than S2 by at least 1 and call that sum S3. Etcetera. You now have an infinite sequence of increasing numbers S0, S1, S2... and one could easily see how to do something similar in the negative infinity case. This will not converge because each iteration in the sequence is greater than the last by at least 1 each time meaning the sequence isn’t Cauchy.
Change the number for when you switch, like every time you go up double the number, or every time you switch you switch between two values (or more). In fact you can probably do some fun things by switching the number randomly with different distributions. Like make it so it doesn't diverge to anything. What makes this weird is what happens if you pick a nonconstructable number? I suppose this would imply that there couldn't be a nice rule to pick the next set of arrows.
Infinite: Use the first negative term, then use the positive terms until you've gone above the first negative. Then add the second negative, then add positive until you're above that. Because we're constantly going up, and both sums don't converge, this will get arbitrarily large.
Oscillating: add until you get above a certain point, then subtract until you get below a different point. This will never converge, nor will it diverge to infinity.
It's obvious that if you compute the first n elements of these sums, the results will be different, since you are grabbing negative numbers from farther along (you are grabbing 2 negatives for each positive one), so the first 100 terms of the first sum have 50 positive and 50 negative, but the first 100 terms of the second one have 34 positive and 66 negative. IF you COMPUTED the WHOLE SUM (which is impossible to do, since it's infinite) the result would be the same (π/4). To sum these sequences you would need a mathematical formula to simplify them, and both sums would give you the same. Oh, and I haven't seen the video yet, so don't blame me pls.
wow! this is mindblowing!
I was first having trouble accepting this, until I realised that we assume we know the real number we're trying to make our series sum to, to infinitely many decimal digits, or equivalently, we always know precisely whether our partial sums are greater than, less than, or equal to that number
I lost brain cells very early but this video is amazing. Never expected something like this to happen.
A series whose terms' magnitudes approach some nonzero constant diverges, but not necessarily to infinity. For instance, the famous sum(x,0,inf,(-1)^x)
Study Weierstrass. For any real number, there exists such a rearrangement of the infinite series such that its sum will be equal to this chosen real number. 🦉
How to oscillate: use two lines instead of one.
As long as you take all infinitely many 1-over-odd-number terms, the sum is the same; but when you take any even finite number of them, added and subtracted up, you will have a ratio of 1 to 1 of subtracted and added numbers for the first one and around 2 to 1 for the second one.
Stupendous work!