Yes, quite easily. The sum of all values of the function f(x) = 0 is zero.
Also, according to the video *title*, all functions that satisfy f(-x) = -f(x) have a sum of 0.
@@markorezic3131 Not necessarily I think. For example f(x) = x. The sum should be undefined since the sum of all positive terms is undefined and the sum of all negative terms is also undefined. You can pair up positive and negative terms to get exactly 0, but that means rearranging the terms which is not always allowed.
@@jweipjklawe4766 it's undefined, but you can define it to -1/12 lol
@@jweipjklawe4766 It is allowed in this case. You can't arbitrarily rearrange the sums, because any rearrangement, aside from around the very specific point of 0, means you are losing out on something because of continuity, so it is satisfactory to say the sum is 0 when f(-x) = -f(x)
Snark
Me, a physics student: Look, I’ve had enough analysis for today, so I’m gonna take a rest and do the assignment tomorrow.
*proceeds to watch a video about analysis*
This was a fun question, and nicely motivated through wanting to generalise the dot product. I love your use of animations!
i never have any idea what these videos are talking about but i watch them all the way through every time because *graph*
Just realized that this channel is brand new, and not years old. Great work!
Wow, dude! You finally helped me break through a mental barrier that left me stumped by things like measure theory. I never could really 'get' why it was important, or, more importantly, how it could actually be used or how it works in general. It just seemed like it was shrouded in waaayy too much jargon and 'symbology' (for lack of a better word), without much in the way of intuition or motivation.
Thank you! Cheers!
Hey everyone!
This video took a bit longer than I hoped to get out the door, but hopefully it won't disappoint.
The next video will probably take a while, as I'm expecting it to be a longer one on a very different topic. So in the meantime, I thought I'd share a link to some of my older videos on another channel in case you'd be interested:
ua-cam.com/channels/o-H6EyTbD-7inMwW70QdtA.html
Most of the videos on that channel were made to supplement undergraduate calculus classes when I was an instructor at Texas A&M University, so the subject matter is accordingly more traditional and narrowly focused. Some videos are lectures made during COVID lockdown, but many others feature animated explanations similar in style to the Morphocular videos here. For a playlist of these more animated videos, take a look at these:
ua-cam.com/play/PLjHDjmY5z0plXuUiYepbyIvcgEG48xDxZ.html
ua-cam.com/play/PLjHDjmY5z0pn_p5haaVYA-Epp5Hwx1gXO.html
Hopefully these videos will be useful and entertaining to some of you, but if you want to stay up to date on future videos, I recommend subscribing to (or otherwise keeping tabs on) this Morphocular channel, as I plan to post pretty much all future content here.
Okay, that's all. See you soon(-ish) with a new video!
Thank you!
It's a great channel with great videos =)
Also, the guitar music playing in the background is very beautiful. Could you provide a link to it? (or the names of the tracks)
@@xemnes6494 I realize this is a really late reply, but here are the two guitar songs I used in this video:
"Checkmate": ua-cam.com/video/0-Hy0905ZBg/v-deo.html
"Orient": ua-cam.com/video/kSEX5wYtCo0/v-deo.html
@@morphocular kudos!
Just subbed to the channel, after the wheel videos I just had to
Dear Morphocular,
As an enthusiast, I wondered if this concept of rearranging sums as magnitudes could be extended into an R^2 space by means of pixels (infinite or finite resolution space, take your pick), black and white, signifying negative or positive direction, and the magnitude being the amount of actual pixel resolution used up per "scan line" (also potentially infinite, in order to extend our idea into this space). Then, any convergent (or even a subset of irrational, divergent) sums could be represented by a scan-line "drawing" process, so to speak, where the pixels are arranged specifically like discrete values within some viewport (or local space). Therein, you could rearrange the order of these any way sequentially from left to right, and the integral of area could remain fixed for a given value chosen, perhaps even simply being defined as the total area of a circle within a well-defined viewing range. If the viewing range shifts, however, the value does too, so the analog is not a 1-to-1 extension. It requires some work, but negative infinity would be represented by black space (or white, depending on established convention) and positive by white space. This would also allow one to approach infinity by each frame being a dithering of tone into the next view, or a static version of the viewing frame considered 'sufficiently close' by an epsilon-delta argument.
I also pondered the thought of applications to these changing of frames themselves leading to new applications of derivatives and possibly fractional derivatives.
New fractional derivatives possible here too by comparing the integral/range "frame by frame?"
Any thoughts, is this a new extension or a not so new idea?
11:26 YOU SAID IT! YOU SAID THE THING! WOOOOOO
Erm what the sigma?!
Breathtaking proof! I had this exact doubt some time ago; I'm amazed that there's even a video about it.
What a gold channel I just came across
Thanks for discussing this! It's really cool to walk through why uncountable sums don't work!
What if you had some negative terms in the function? I mean, it would feel like if you had a function like sin(x) and tried to sum all of its outputs from 0 to 2pi, it would seem like the output would at least have a reason to be finite since every positive term can be "cancelled out" by a negative term. Maybe it would act like a conditionally convergent series and could either sum to anything or not sum at all depending on what "order" you sum things in?
Edit: Changed pi to 2pi because I'm dumb.
i thought that too.
The integral of sin(x) from 0 to pi equals 2
The usual way to handle negative values is to do the positive and negative values separately and then add the results. But of course, this only works for absolute convergence. Anything that uses positive/negative cancellation to get a finite value will run into the same problems (to an even greater extent) as conditionally convergent series.
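To make the split concrete, here's a rough Python sketch (my own toy example, using the absolutely convergent series sum of (-1)^n / n^2 as a stand-in):

```python
# Rough sketch of the positive/negative split, truncated for the demo.
N = 100_000

terms = [(-1) ** n / n**2 for n in range(1, N + 1)]

pos = sum(t for t in terms if t > 0)  # sum the positive terms alone
neg = sum(t for t in terms if t < 0)  # sum the negative terms alone

# Each part converges on its own (absolute convergence), so the
# combined result is independent of summation order.
print(pos + neg)  # ~ -pi^2/12 ≈ -0.8225
```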
Because the set of summands is uncountable, how are you going to specify this order?
(if you can properly order them then this is an enumeration, meaning that the set of summands is actually countable which is a contradiction)
Perhaps something to do with the axiom of choice?
@@angelmendez-rivera351 Sure, but the point of the question is what happens when we extend the methods in the video to the continuous case, which will not work as such. (if you order the uncountable set without enumerating it, there will be many subsequences converging to different values)
This is really fascinating! I'm super curious to see what some of the consequences of this whole functions-as-vectors deal is!
A consequence of moving from sums to integrals is that L2 must be defined in terms of equivalence classes, i.e. if you take a function f and change a finite amount of points on it (or really any measure 0 amount of points) it is still considered the same function f.
This is not an issue for a sigma sum
The point you arrived at at the end of the video reminded me of a video of 3b1b about probability density. The motivation behind using probability density instead of just probability is similar to the motivation of using the integral of f(x)g(x) instead of the plain sum - the individual values would be too big and their sum would just approach infinity (which is not too useful), so you counteract that by multiplying them with dx first
I really like the א & ב numbers but I ❤ large cardinals! I wish there would be more videos about them that explain the connection to axioms and how the strength of an axiomatic system relates to large cardinals and how large cardinals are constructed going way further than all the Powersets etc…
This is also why, in measure theory, discrete (real-valued) random variables are precisely those whose cumulative distribution functions (CDFs) have a countable range. A countable range means there are countably many "jumps" and thus countably many terms in the probability mass function (pmf). If the range is uncountable, you cannot define a pmf, for the reasons shown in this video, and (possibly with countably many exceptions) the probability of any individual outcome must be 0.
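A quick sketch of that jump-counting idea, with a toy distribution of my own choosing (P(X = k) = (1/2)^k for k = 1, 2, ...):

```python
# Recover the pmf from the jumps of a discrete cdf.
def cdf(x):
    k = int(x)  # floor, for the nonnegative x used below
    return 1.0 - 0.5 ** k if k >= 1 else 0.0

# The cdf's range {0, 1/2, 3/4, 7/8, ...} is countable, so there are
# only countably many jumps, and each jump is one pmf term.
for k in range(1, 6):
    jump = cdf(k) - cdf(k - 1)  # size of the jump at x = k
    print(k, jump)  # 0.5, 0.25, 0.125, ...
```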
Or a better way to think about it is that an integral _is_ an uncountable sum . . . where all the terms have been weighted down small enough that the sum can be finite, which means they have to be so small they cannot be understood as real numbers. And the thing that does that? dx.
That's _really_ why "dx" is always at the end of an integral. In fact, I advocate we should _not_ write integrals as
int_{a...b} f(x) dx
but, like sums, we should write the variable of integration with the indices:
int_{x=a...b} f(x) dx.
This may seem redundant, but the "'dx' indicates the variable of integration" is actually something that won't go far when you get into something like physics, and run into this:
Moment of Inertia = int r^2 dm
which is most assuredly NOT integration "with respect to mass" in most cases, but rather integration over all _points of the object's volume._ It _really_ is
Moment of Inertia = int_{P ∈ R^3} [r(P)]^2 dm(P)
where r(P) is the distance to the point P from the rotation axis, and dm(P) is the infinitesimal mass located at point P, which also equals rho(P) dV, where dV is the infinitesimal volume unit and rho(P) is the mass density at P. The former notation is worse than useless; it doesn't even make sense at all!
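For what it's worth, here's a rough numerical sketch of that last formula (my own toy setup, not anything from the video: a uniform solid cylinder spun about its central axis, checked against the textbook I = MR^2/2):

```python
import numpy as np

R, h, rho = 1.0, 2.0, 3.0  # radius, height, density (arbitrary)
n = 1000                   # grid resolution in x and y

x = np.linspace(-R, R, n)
y = np.linspace(-R, R, n)
X, Y = np.meshgrid(x, y)
dV = (x[1] - x[0]) * (y[1] - y[0]) * h  # volume of one grid column
inside = X**2 + Y**2 <= R**2            # points P inside the cylinder

# I = sum over points P of r(P)^2 dm(P), with dm(P) = rho(P) dV
I_numeric = np.sum(rho * (X**2 + Y**2) * inside * dV)

M = rho * np.pi * R**2 * h
print(I_numeric, 0.5 * M * R**2)  # both ≈ 9.42
```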
i'd been thinking about this problem for a week. i came up with the fact that only zero can be a limit point, and there can only be finitely many elements above a given lower bound a, but i couldn't get that last connection, the punchline. kinda frustrating, but still happy to know
Never seen the characterization of summable sets so neatly demonstrated. There's however one question that lingers in my mind: as I understand, this video seeks to extend the notion of summability rather than series convergence, which is a stronger condition for sequences because it is independent of summation order, and I guess it becomes even more so for uncountable families. My question is then: if we well-order the family (i.e. change the family of indices from R to a transfinite ordinal), can we get a finite sum for sets with the continuum cardinality (or bigger)? Another way of looking at the situation is to examine whether those sums might exist within the field of surreal numbers. Not sure if it's great for UA-cam though.
Thanks! this helped clear a bunch of questions.
At 0:40, you say that cardinality doesn't have "day-to-day uses", but it has a lot of applications philosophically and in computer science.
There's a simple cardinality argument to show that any automaton we've devised (finite state, pushdown, Turing) cannot describe all possible sets of strings: the collection of all sets of strings is uncountable, but the set of automata is countable.
I hadn't heard of that application, but it does sound interesting!
Just to be clear, though, all I was trying to say (a bit tongue-in-cheek) in the video was that there probably isn't an application of cardinality that's SO practical that you can directly apply it to solve a mundane, day-to-day problem, and if that's your bar to qualify for practicality, you'll probably be disappointed. (I could be proven wrong, though, which would make me very happy!)
It looks like everyone has a slightly different definition for what "practical" means, but the main point I've been wanting to drive home in these past two videos is that cardinality matters and is interesting more than just philosophically. And it sounds like the cardinal argument you mentioned is another good example of that!
All automata we've made have finite memory anyway, so the fact that they can't describe every string is obvious.
The whole infinite Turing machine thing is just an approximation, it doesn't describe real machines.
@@IamGrimalkin But it's not about describing every string, it's about describing arbitrary subsets of finite-length strings. We can computationally describe certain infinite subsets of strings with machines, but not others even with finite memory, which I think is non-trivial.
@@Awesome20801
What do you mean by this?
Obviously a computer cannot process a string which is larger than its total memory size; and if you have a string with a bounded length and a finite number of characters you only have finitely many arrangements of it.
So where are these infinite subsets coming from?
@@IamGrimalkin You can totally process arbitrary length strings with a finite memory machine. One of the more basic automata, a DFA, does so by taking each digit individually, and not storing the entire string anywhere. So, you can have a set-up where someone feeds in a string character by character, and at no point does the computer need to maintain more than a bit of information about state.
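To illustrate, a minimal DFA sketch (my own toy machine: it accepts binary strings with an even number of 1s, keeping a single state variable no matter how long the input gets):

```python
def accepts(stream):
    state = "even"  # the machine's entire memory
    for ch in stream:
        if ch == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

print(accepts("1001"))            # True  (two 1s)
print(accepts("1011"))            # False (three 1s)
print(accepts("10" * 1_000_000))  # True, still constant memory
```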
Your narration is amazing
Of course, L2 is isomorphic to l2, the space of square summable sequences. So the dot product integral is still just a countable sum in disguise.
???
@@Anonymous-df8it Pick an orthogonal basis for L2, for example plane waves. Then each function in L2 can be associated with a sequence, where the sequence values are the coefficients of each basis element for that function. Then, the L2 dot product just becomes the l2 dot product on these sequences.
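Here's a rough numerical sketch of that correspondence (my own choice of test functions, using a truncated real Fourier basis on [0, 2*pi] instead of plane waves):

```python
import numpy as np

N = 100_000
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N

f = np.exp(np.sin(x))                # arbitrary smooth test function
g = np.cos(x) + 0.5 * np.sin(3 * x)  # band-limited test function

def coeffs(h, n_max=20):
    """Coefficients of h against the first few basis functions."""
    cs = [np.sum(h) * dx / np.sqrt(2 * np.pi)]
    for n in range(1, n_max + 1):
        cs.append(np.sum(h * np.cos(n * x)) * dx / np.sqrt(np.pi))
        cs.append(np.sum(h * np.sin(n * x)) * dx / np.sqrt(np.pi))
    return np.array(cs)

# The L2 dot product (an integral) matches the l2 dot product of the
# coefficient sequences (a countable sum).
print(np.sum(f * g) * dx)     # integral version
print(coeffs(f) @ coeffs(g))  # countable-sum version
```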
huh.
i wonder if it can be generalised to higher-order infinities.
A really interesting topic. Nice presentation.
I enjoy the concept of the uncountable sum !
Neat! I like the way that this proves a direct summation of uncountably many positive terms is non-finite, but I'm left with a lingering question: is there some function-equivalent of a Riemann Series? I mean, once negative numbers are allowed, the subset principle no longer holds, so could you define a function with a real number input such that the 'sum' of all the terms is finite, using some naive direct-sum approach? And if so, does the actual method in which you add the values matter for the final sum?
Yeah, if you take x/(x^2+1) and add values going from 0 to infinity and to -infinity at the same speed, pairing up values 1:1 in the right way, your tentative sum will always remain at 0 so the limit of your sum is 0. Hard to formalize though since in order to get anywhere, you have to find a way to pair up uncountably many terms that you want to add
@@iwersonsch5131 Also, IIRC, once you've got a series whose positive components go to infinity and whose negative ones do so as well, you can no longer define it to any meaningful value. You can always arrange the terms in different ways that make the series converge to a different value.
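A crude numerical sketch of that order-dependence (my own discretization, since a truly uncountable pairing can't be coded directly):

```python
# Sample f(x) = x/(x^2 + 1) on a fine grid and pair terms two ways.
f = lambda x: x / (x * x + 1)
dx = 0.001
N = 1_000_000

# Symmetric 1:1 pairing: each step adds f(k*dx) + f(-k*dx) = 0 exactly.
print(sum(f(k * dx) + f(-k * dx) for k in range(1, N)))  # 0.0

# Rearranged pairing (positive terms taken twice as fast): the
# running total now drifts off instead of staying at 0.
print(sum(f(2 * k * dx) + f(-k * dx) for k in range(1, N)))  # far below 0
```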
good video !
nice visuals
and nice conclusion
before finishing the video: an integral of a function from a to b is basically an "uncountable sum" of the function, except each term is multiplied by a delta x or dx, so maybe a definition of an uncountable sum of a function could be the definition of an integral without the delta x term at the end
isn't integration just adding uncountably infinitely many values? and even if all the unique values are nonzero, their sum can still come out finite, like the integral from -inf to +inf of e^(-x^2): the integrand is positive for all real x, but the total is sqrt(π)??
lol i spoke too soon, should've finished the video before commenting.
God, I fucking love mathematics. I can't believe I'm this lucky to have found something I enjoy this much.
I kind of missed the point where an integral would not shoot to infinity while a sum would. Could someone explain using an example?
Take the function that is 1 on the interval from 0 to 1, and 0 everywhere else. The values in the interval from 0 to 1 integrate to 1 because the total weight you spread the 1s over is only 1, rather than the uncountably infinite weight you would encounter when trying to sum the values directly.
So, let's say that f(x)=x, and let's focus only on the region from 0 to 1. If we try adding just the outputs at the points of the form 1/n - not even at all the real numbers - we have 1+1/2+1/3+1/4+1/5+..., which is a classic example of a divergent infinite series. So adding all the points together will DEFINITELY be infinite. On the other hand, the integral of f(x)=x from 0 to 1 is 1/2, which is definitely finite. The integral essentially multiplies each point by the distance to the next point, and lets that distance approach zero. This makes the values shrink much quicker.
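Here's a quick numeric side-by-side of the two replies above (my own toy comparison: unweighted partial sums versus dx-weighted Riemann sums of f(x) = x on [0, 1]):

```python
for n in (10**2, 10**4, 10**6):
    harmonic = sum(1 / k for k in range(1, n + 1))  # 1 + 1/2 + 1/3 + ...
    dx = 1.0 / n
    riemann = sum((k * dx) * dx for k in range(1, n + 1))  # each term * dx
    print(n, round(harmonic, 2), round(riemann, 7))

# harmonic keeps growing (5.19, 9.79, 14.39, ...) while the
# dx-weighted sum settles at 1/2.
```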
Integrals are basically finding the area under a curve in unit squares.
Summing a function over uncountably many inputs is like trying to find that area by adding up lines.
Dammit, I was thinking of something different! A sum with positive and negative values! I was thinking of constructing it as taking a function f(x), then you sum all the values from 0 to ∞. Essentially performing an infinitely small Riemann sum but without multiplying each term by dx. I was thinking about how the area must be 0 for the sum to be non-infinite, and how f(0) has to either be 0 or an oscillating discontinuity (otherwise the sum equals itself plus f(0), which wouldn't equal the sum). But all hope was lost when I realized as I wrote this that, if you split dx in half, the sum would equal twice itself, ergo the only meaningful sum you could extrapolate from this is one where the sum is 0 (or, trivially, ∞).
Hey, Morphocular! I have a request! 😃 I think this video finally gave me some insight into a question I have been trying to find a good, understandable answer for many years now. But I don't think I have quite enough insight yet to actually do it myself, yet! Maybe it could be a good solid example that you could cover in a video and show how your point here about integrals (and what I assume to be an introduction to measure theory and Lebesgue integrals(?)) could actually be used in real life. The question sounds straightforward, but is actually very difficult.
Q: "How would one go about calculating (or approximating numerically) values for the Gamma Function without the aid of a black-box tool like a spreadsheet gamma(x) or gammaln(x), or Wolfram Alpha, or anything like that? And not just for special values like for integers n or (n + 1/2), but for a general rational/real number r. As a specific, concrete example, how would one calculate (more or less 'by hand', if that's even practical) the value of Gamma(4/3)?"
Background on why I'm asking this specific question:
Everywhere I look there are either a) appeals to an existing black-box function that will return a value for you, without understanding where it came from, b) resorting just to the general properties of the Gamma function (such as Gamma(x + 1) = x * Gamma(x)) and special pre-known values such as Gamma(1/2) being some value involving Pi, I believe, or c) virtually impenetrable mathematical formulas that assume the reader knows way more advanced calculus and advanced theorems that themselves require other advanced theorems to even *understand* what they're *for!*
It's very frustrating, because the Gamma function is used everywhere when you start digging down into the fundamentals of many important topics, such as probability theory, Bayesian statistics (and classical stats, too), information theory, machine learning, and a whole host of other areas.
But *nowhere* have I found an explanation of how to calculate it that goes from 1) here's the definition and some properties of it, to 2) here's how you can actually calculate it for yourself if you really want to. Is it really that opaque???
Compare the situation with say the exponential or natural logarithm. If you have the patience, you can use the Taylor series to calculate them for whatever input you like. Likewise for most other common functions you can think of, trig functions, even things like the Normal distribution can be approximated in various fairly straightforward ways.
Surely someone can open up the black boxes and explain how they work to us non-super-advanced-math folks?!?! Especially for such a fundamentally important function!
Anyway, if you have any ideas you could mention or perhaps if you get inspired to even make a video about it, I personally would very much appreciate it, and I suspect I'm not the only person in the world who would! 😅
Cheers!
the normal distribution can be approximated in straightforward ways? idk, seems annoyingly black-boxy to me. there's no reason even the most important things won't be hard to compute, unfortunately.
The gamma function is defined as an integral, so you can take your favorite method of approximating integrals like Gaussian quadrature or Riemann sums, or what have you and just do that. Riemann sums in particular don't require any more knowledge of calculus than "an integral is the area underneath the curve." Quadrature is a bit more involved and the coefficients are pretty opaque, but it's not hard to implement.
There exist myriad resources for deriving this formula for the gamma function, so the progression can go "The factorial is well-defined for nonnegative integers" -> "I want to extend the factorial to arbitrary real numbers" -> "Look at this integral I found that extends the factorial to arbitrary real numbers (except negative integers)" -> "I have approximated the integral at my favorite value."
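Here's a from-scratch sketch of that recipe (my own parameter choices; math.gamma is called only at the end as a check):

```python
import math

# Gamma(z) = integral from 0 to infinity of t^(z-1) e^(-t) dt,
# approximated by a midpoint Riemann sum with the tail cut at t_max.
def gamma_approx(z, t_max=50.0, n=1_000_000):
    dt = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt  # midpoint of the k-th slice
        total += t ** (z - 1) * math.exp(-t) * dt
    return total

print(gamma_approx(4 / 3))  # ≈ 0.89298
print(math.gamma(4 / 3))    # library value, for comparison
```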
you can find an assessment of the gamma function's approximation in a book on asymptotic analysis or perturbation theory. The gamma function is often introduced in advanced calculus because of the routine integration involved; however, the appearance of special functions within approximations is not necessarily rigorous in application, as many results in asymptotic and perturbative analysis often aren't.
Surely you've at least googled it and saw the definition.
And from the definition, it's not that hard to calculate - you can approximate integrals using simple techniques learned from a basic calculus class, and the powers are something you should already know how to do.
Loved this video! I'm thinking of taking functional analysis in the future so really enjoyed seeing your excellent preview of it.
What if, instead of dividing it up by fraction segments, you divided it up by all real numbers less than one? Then you would have an uncountable number of finite points.
Also, you just baselessly claim that finite sums can't exist based on your proof, without acknowledging the earlier assumption you made that the function has to be positive.
I guess you could also get a close approximation of the sum if you ignore the terms less than 1/n for very large n.
The Subset Principle only applies to positive numbers. (ik that he said at the beginning that he will only look at positive sums, but before he defined the subset principle, he said that this applies to ANY addition, which is not true)
i mean just use the hyperreal numbers. and rephrase the conclusion as "a finite-valued uncountably infinite sum can have at most countably infinitely many non-infinitesimal terms" (which is equivalent for the reals, as 0 is the only infinitesimal real). and now you have defined an integral exactly as an uncountably infinite sum of infinitesimal terms.
Did you say the integers are uncountable? I hope not!
I think this is an excellent video in terms of motivation and education: as you say towards the end, the integral is "obvious" as the continuous analog, but I think many of us take that obviousness for granted since, if you are watching math videos for fun, calculus has already probably become a natural part of your tool belt. Where many pre-measure theory courses would just declare the integral as "correct," this video less justifies the integral and instead highlights why we need some other tool (which is just incidentally the integral) beyond the classic sigma sum is required.
Great piece of work! It definitely belongs in classrooms!
What if I took a function like sin(x)? I could argue that it sums over all inputs on the range zero to 2pi: since it is as much over the line as under it, it sums to zero. Then, if we wanted a finite sum of other numbers, we could make an infinitesimal (single-input) hole, like at pi/2, where we make it equal zero or have no output. Then, since we know that all inputs canceled originally, we could say it sums to negative one, making it so that we can produce any finite sum by punching single-input holes in the function. So why do we need to make it have finite outputs over a range like we did at 10:25? I must be very lost. BTW, just quickly, there is a transformation to make a hole in the graph with a specific value. Also, were there some limits on the function, like that it can't be negative and can't equal zero?
What is the music in the background?
About adding up all values of a function: How about a function that is 0 almost everywhere and has countably many points at which it has a positive real value which creates an absolutely convergent sum?
Then it's absolutely convergent...
Could we just take the sin(x) or cos(x) function?
If it’s not finite, can you add negative terms (an infinite number of them) for a conditionally convergent series? 8:37
Assuming you want to have both uncountably many positive and negative terms, you would first have to define an order to sum them in. Constructing such an order explicitly may be difficult to say the least
Saying an infinite series has a finite sum is saying that infinity has a stopping point. That is a contradiction of the definition of infinite. They have a limit value. When you alternate the definition of infinity back and forth, you can make up anything you want. I especially love when people say 'at infinity' ... makes me chuckle.
Isn't this obvious though, in that it's kind of the definition of an integral? As soon as I saw the question "Can adding up ALL a function's outputs ever give a finite total?" I immediately thought "isn't that just what an integral is?" An integral just chops up the x-axis into smaller and smaller slices and takes the area of each as the number of slices tends to infinity. Isn't that exactly the same thing as "adding up all the outputs", since we'll have an infinite number of outputs with (effectively) zero width, so we're tending towards just "adding up all the outputs" as the number of slices tends to infinity?
No, because the integral normalises the slices, while the sum does not.
here's an idea: what about infinities greater than the cardinality of the real numbers? How can we visualize those infinities or at least understand them in some way?
I think you'll have to go from giving every value finite weight to giving every value zero weight, and only a density of weight (i.e. weight per area). For most functions I think that'll be the only way to keep your sum from shooting off to infinity
Giving zero weight to terms doesn't violate the (weak) subset principle that only demands a non-strict inequality, not a strict one. This weaker demand, I believe, is common practice in many fields of mathematics because of how usable it makes uncountably infinite sums (albeit under a different name ;)
Won't that essentially make it an integral, though? I might be mistaken, but I see no conceptual difference between what you propose and that
@@pedroff_1 Yes, the name is integral.
the integral of f*g is also gonna be infinite, the dx in the end balances the infinity out or smth tho
That doesn't hold in general, it depends on f(x) and g(x)
yes 1-abs(sign(x))
Of course an uncountable sum can be finite, at least if we allow the use of the Axiom of Choice.
Then any uncountable sum consisting of finite non-zero values will always be finite
the video would be way shorter if he mentioned the rule of thumb at the start: sum in discrete case, integral in continuous case.
Couldn't you instead of going to calculus and calling it an integral, instead say that the dot product of two functions is defined as the mean of the terms in the uncountable sum? I feel like that would amount to exactly the same as an integral, just stated a bit differently.
That is basically what the integral does. You can't just use it because it requires defining what the mean of infinitely many values even means - and while it's easy for a countable set (limit of the finite case as you go to infinity), an uncountably infinite one would probably literally be defined as an integral in this context. So there's not much point in reframing it.
f(q)=1/q² where q is irrational
10:00 What about 1/2.5? Why does the denominator have to be a whole number?
Seems uncountable now
Since you *can* chop up the space into countably infinitely many parts (and that argument shows that you can), it forces the set of nonzero points to be at most countably infinite. If you want to, you can cut it more finely, but doing so can't introduce more elements than there were to start with.
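Here's a small numeric illustration of that bucket-counting argument (a toy summable family of my own choosing):

```python
# If the total sum is S, at most S*n terms can be >= 1/n, so each
# bucket of "big" terms is finite, and the nonzero terms form a
# countable union of finite sets.
terms = [1 / k**2 for k in range(1, 100_001)]  # a summable family
S = sum(terms)  # ~ pi^2/6

for n in (1, 10, 100, 1000):
    count = sum(1 for t in terms if t >= 1 / n)
    print(n, count, "<=", round(S * n, 1))  # the bound always holds
```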
A guess before the video: the function must be zero except at a countable number of points, and those values must converge to zero fast enough as well.
Why wouldn't f(x) = x work?
Just try applying the argument in the video as to why these sums don't exist. f(x)=x isn't 0 except for a countably infinite set of inputs, so it doesn't work.
The sum of any uncountable set of positive numbers is infinity.
In order for the summation of a function's outputs to be a finite number, you will need a function that is forever decreasing, such as 1/x restricted to the domain (0, infinity).
Epic
Cool
I'm still left wondering why we use infinity. Nothing in the universe seems to be infinite, right?
But math doesn't care about our universe.
There are no infinities in the universe. Whenever it pops up in physics, we know our model is wrong.
I navigate an infinite minefield at least 3 times a day.
What if we assign finite values to divergent sums? Everybody knows that e.g. 0+1+2+3+4+...=-1/12. Can we similarly assign finite values to uncountable sums?
can you please be my math teacher?
yes isn't that what integrals do?
edit:
Lol, that's what the video is all about.
Well... An odd function can do!
Not necessarily, for the same reason the integral of an odd function over an infinite domain can fail to converge.
yes. it's called a sine wave
f(x)
Shrink to zero? NO! They never get to zero. If they did, then it would not be infinite. "At infinity" or "when we reach infinity" is like saying a positive negative. Diverging to infinity would not be a problem, because you could just go to it and be AT it, or reach it. Your grammatical use of infinity is mind-boggling.
Dude, you wasted 11 minutes of my time just to tell me sums aren't useful for this and we instead need to use an integral??? It's actually a nice argument, but I really don't agree with this type of teaching; this sort of thing might have helped you personally understand these ideas, but it doesn't really help others understand the main concept. Plus, you didn't even motivate the need for a continuous dot product. You could reason this out intuitively without even mentioning the word "countable" if you stuck to a specific example. I like your other videos though :P Looking forward to the followup on this video; I'm sure it will shed some light on some more good insights.
I agree with this here - there are much better ways of talking about dot products AND cardinality...
Suit yourself. I found it very useful, and whilst there might be better ways to explain the different concepts themselves, it's nice to see how they link, and it's just quite enjoyable. You clearly have no passion for maths and are just trying to learn the concepts, which is understandable but not really worth a negative comment.
@@b1gb017 Just because this material didn't strike others the same way it did for you doesn't mean you should judge someone's interest in the subject.
Step 1: Choose the piece-wise function f(x)={a if x=b, 0 else; b is real}
Step 2: ∑_{x∈R} (f(x)) = a