I can’t believe that having known the Taylor series of e^x for so long I’ve never realized that it’s an example of an infinite series of vectors that does not converge to a vector. The metric space of polynomials is not complete but a Hilbert space is!
@@tomtomtomtom691 Like the Romanian guy above said, I think you can't have infinite sums in a vector space without additional structure. Also, the sum doesn't actually equal e^x; it converges to it. To speak of convergence, we need a topology. So we're talking about a topological vector space, at least.
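To see the "converges to it" point numerically, here's a rough Python sketch (the function name is made up for illustration): each partial sum of the Taylor series is an honest polynomial, yet the values crowd arbitrarily close to e^x.

```python
import math

def taylor_partial_sum(x, n):
    """n-th partial sum of the Taylor series of e^x: each one is a polynomial."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
errors = [abs(taylor_partial_sum(x, n) - math.exp(x)) for n in (2, 5, 10, 20)]

# Each partial sum is a genuine polynomial, yet the errors shrink toward 0:
assert errors == sorted(errors, reverse=True)
assert errors[-1] < 1e-12
```

No single partial sum ever equals e^x; the limit of the sequence simply isn't a polynomial, which is exactly the topology-flavored distinction being made above.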
I am delighted to encounter a mathematically literate explanation of modern physical theory that is both elegant and clearly explained. You are a true find, sir.
It's also the entirely wrong explanation. The necessity for Hilbert spaces in physics does not come from the approximation theory of L_2 functions. It comes from Kolmogorov's axioms for ensembles. The creator of this video keeps barking up an entirely wrong tree here.
I love the illustration with the infinite series of polynomials converging to a non-polynomial that lies outside the vector space. I wish I'd had that kind of illustration when I learned about Hilbert spaces.
Except that there is nothing wrong with that limit. It just isn't physics: you'd impose the correct asymptotics on a computation of the amplitudes, so you would never get an exponential in QM, except as a mistake or a bad approximation. Imposing correct conservation-law asymptotics might still yield a transcendental wave function; it just won't be one blowing up at spatial infinity.
@@Achrononmaster I think you aren't getting the mathematical concepts being explained and described. The reality of the limit has nothing to do with physics at all; it stands in its own mathematical right. As for your assertion that there are no exponentials in QM, that is quite interesting and merits a discussion of what constitutes a necessary feature of any formulation of quantum theory. If GR and QM are ever to get closer, then such dogma will have to be addressed. Or are there no exponentials in GR either?
Thank you so much. We were taught the definition of a Hilbert space but were never told why we'd need it. I guess it's inherent in the definition, but when you get bombarded with 1000 definitions at once, stuff like this completely flies under the radar.
I went through 10 blog posts and 10 or more videos trying to understand Hilbert space, but none of them made sense to me. Finally God rewarded me with your video in my suggestions. Now I know what a Hilbert space is and why it's essential when we convert a classical ML feature space to a QML feature space :D. Thanks for your awesome videos :D
You have opened my eyes! I finished the "Metric Spaces" course in my uni but never really understood how it related to quantum mechanics! Now, it makes sense! Thanks!
Questions: - If an infinite polynomial can converge to an exponential, why can't we just say that the exponential is in fact an infinite polynomial? And the same for any other function we can take a Taylor series of. - The rule we added is exactly the Cauchy completeness condition. But a Hilbert space also has the inner product condition, which we seemingly don't need for now. Why do we add it then? Why can't we assume that our space is just Cauchy complete, without an extra inner product?
- Functions that can be expressed as a power series (we call these analytic functions) do indeed form a vector space. However, it is a different vector space than that of polynomials. Also note that if we take an infinite sum of exponentials (which are vectors in a vector space of analytic functions), we might end up with a step function, which is not analytic, and in fact not even continuous! Then by taking a potentially infinite sum of step functions, we might end up with a different vector space still. Eventually the space we're working with loses all the nice properties that the original vector space started with. Not to mention that if we allow infinite sums, it's not even obvious what space we're actually working with. - Note that Cauchy completeness uses the language of "converging". But what does converging even mean? We need some notion of distance on our vector space in order to even say that two vectors are close together, or that a sequence of vectors is approaching some other vector. The inner product gives us that notion of distance. Note that for R^n we define the length of a vector with the dot product, which is just a special case of an inner product.
@@mironhunia300 However, an inner product is much more than a distance function (a metric). It allows you to measure the lengths of vectors (a norm) and lets you talk about projections and orthogonality between vectors.
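A quick numerical illustration of the step-function remark above (a sketch, not anyone's official code): the partial Fourier sums of a square wave are finite sums of sines, each perfectly smooth and analytic, yet they approach a discontinuous limit.

```python
import math

def square_wave_partial(x, n_terms):
    """Partial Fourier sum of a square wave: a finite sum of (analytic) sines."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Each partial sum is smooth, but the limit is a step function equal to 1 on (0, pi)
# and -1 on (-pi, 0): a jump no analytic function can have.
assert abs(square_wave_partial(math.pi / 2, 10_000) - 1.0) < 1e-3
assert abs(square_wave_partial(-math.pi / 2, 10_000) + 1.0) < 1e-3
```

This is exactly the "infinite sums can escape the space" phenomenon: analytic in, discontinuous out.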
AMAZING! Thank you so much for the effort of making such content. I'm currently learning QM all over again, and even though I thought I had mastered it, I realize that I didn't have these intuitions and was kind of just brute-forcing the math ^^. THANK YOU!
This video is awesome! I've always wanted to understand what the Hilbert Space is and finally I can understand the idea. Thank you so much for creating this series of videos!
I've taken complex analysis and topology and learned a little bit of group theory. They're all helpful for understanding Hilbert spaces. Your videos gave me a chance to connect many pieces of them together. Thank you!
This was a great video, and I was so surprised when we came to the definition of Hilbert space and it seemed so clear and fitted together. The way you phrased the last sentence at 6:56 (about our vector space 'including the edge') threw me off, but I think you were saying that the sum of the infinite linear combination lives outside of our vector space (because of those initial rules), and so we do not include it as part of the Hilbert space. But the one thing I feel I don't understand is: how does the existence of our boundary inform the way we treat these mathematical operations/objects? It seems we are perfectly fine utilizing infinite polynomial sums that themselves converge to polynomials. But what do we do with our e^x infinite linear combination? If we come across any general infinite linear combination that we find out is non-polynomial, do we discard the whole thing and say we don't want anything to do with that calculation? Or do we say in those situations we force ourselves to only take finite sums, and treat it as a working approximation? I guess I'm asking: as we've barred these non-polynomials, what do we do when we encounter them? I can see that we've banned them, but I don't know how we go about enforcing our ban.
We are not barring out functions which are not polynomials. We are getting rid of the vector space of polynomials altogether, and replacing it with a better vector space.
1st viewer going to be a long time viewer. See the thing is I've been looking for a channel like yours who explains every detail as if it were a class. Thx & God bless 🙏 you!
What isn't quite satisfying to me is... The exponential is the limit of infinite polynomial terms. So why isn't it a member of the vector space of polynomials? It has a well defined and unique set of coefficients in the x^n basis, so I don't see why it doesn't count beyond just happening to require infinitely many non-zero coefficients. It doesn't quite match the usual conception of a "polynomial," but that seems more like a matter of language than of math. The logarithm is definitely _not_ in the vector space of polynomials since there is no element, even with infinitely many non-zero coefficients, that actually matches the function, but the exponential, along with sine and cosine have no such issue.
Hello! This is a really good question! You say that it seems like a matter of language as opposed to math, and that’s sort of the key. Remember that math is partially the art of precise language; what distinguishes a polynomial from an arbitrary function is how we define a polynomial. From the definition, we can derive several properties that *must* hold true of any polynomial. For example, it’s easy to show that for every polynomial, there exists some integer N such that after taking the Nth derivative, you get zero. e^x satisfies no such property. Likewise, you can prove that any non constant polynomial approaches either + or - infinity as x goes to +- infinity. e^x approaches zero as we go to -infinity, so this property is broken. So yes, in a way it is a matter of language, but we need to be precise with our language in order to be consistent (otherwise, we could say that any function with a Taylor series is a polynomial…which isn’t a very good definition for a polynomial). Hopefully this partially addressed your question! -QuantumSense
@@quantumsensechannel The Nth derivative rule pretty cleanly rules out any polynomial of infinite degree. The divergence rule already has an exception, so it's not really as convincing as a rule rather than just a trend. As I mentioned in my initial comment, allowing infinite polynomials still excludes many functions, namely those with finite radii of convergence, like the aforementioned logarithm. The inverse function also doesn't have a Taylor series. _Most_ inverses of polynomials, infinite or otherwise, probably aren't polynomials. Luckily the definition of a vector space doesn't say anything about needing an inverse.
@@angeldude101 I can't post a link in the comments, but search for "Why is the exponential function not in the subspace of all polynomials?" on Math Stackexchange. Your doubt seems to be addressed there. In the video, the analogy of vectors inside the box and their limit being on the edge is relevant. We need to invoke topology at some point to make complete sense of it, as far as I understand.
@@angeldude101 The major problem with allowing "polynomials with infinitely many non-zero coefficients" is that, as you yourself stated, this requires a lot of additional structure: it only really makes sense when these polynomial approximations converge, which requires a notion of convergence, which in itself requires a metric, which induces a topology and so on. This space can be defined (it is usually called a closure of the underlying set), but it is usually not thought of as the set itself, since polynomials can exist in a space without all this extra machinery. Hope this helps
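To make the "polynomials live without the extra machinery" point concrete, here's a hedged Python sketch (the coefficient-list representation and helper name are my own choice): a finite coefficient list is annihilated by repeated differentiation, while the Taylor coefficient stream of e^x reproduces itself and never terminates.

```python
from fractions import Fraction
from math import factorial

def derivative(coeffs):
    """Differentiate a polynomial given as a finite list [c0, c1, c2, ...]."""
    return [Fraction(k) * c for k, c in enumerate(coeffs)][1:]

# A degree-3 polynomial: 1 - 2x^2 + 5x^3. Four derivatives in, we hit zero.
p = [Fraction(1), Fraction(0), Fraction(-2), Fraction(5)]
for _ in range(4):
    p = derivative(p)
assert p == []

# The Taylor coefficients of e^x are 1/n! and never terminate: differentiating
# that coefficient stream gives back the same stream, so no Nth derivative is zero.
exp_coeffs = [Fraction(1, factorial(n)) for n in range(10)]
shifted = [Fraction(k) * c for k, c in enumerate(exp_coeffs)][1:]
assert shifted == exp_coeffs[:-1]
```

Finite lists need only addition and scaling; the infinite stream only makes sense once you bolt on a notion of convergence, which is the extra structure being discussed.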
Based on your analogy of including the boundary condition I initially believed that a Hilbert space is a vector space that also included cases like polynomials approaching e^x. However, based on the written definition I think it might mean that the polynomials don't count as existing in a Hilbert space due to the ability to approach e^x. Is this right? Like, we're saying that Hilbert spaces are a subset of vector spaces where it is impossible to exit the set by approaching the boundary. Because for these vector spaces the way they are defined means the boundary is included. And this inclusion is defined by this rule of Cauchy Completeness on the inner product.
Frankly, I've already spent waaay too much time trying to formulate a precise and accurate answer to your question. I'll cheat a bit by saying this: boundaries are not an easy thing to define, since this requires the notion of a metric on the given space (which I believe is why Hilbert spaces necessarily require an inner product, inducing the metric). Given a metric on a space: if that space is compact, meaning that it is bounded (there is a maximum to its norm) and closed (meaning that it contains its boundary), then it is Cauchy complete. But the other way is not necessarily true: a space can be Cauchy complete without being compact. The set of polynomials is not Cauchy complete, because there exist sequences of elements of the set (e.g. Taylor polynomials) which converge to things that are not members of the set themselves (e.g. the exponential function). The functions that can be expressed as Taylor series can therefore be thought of as some sort of "boundary" of the set of polynomials, though I would not be able to show that this forms the complete boundary.
What may help here is the analogy with the rationals. They form a Q-vector space, but they are not Cauchy complete. The Cauchy completion of the rationals is the reals, i.e. these fill up the gaps between the rationals. So you could view the irrationals as the boundary of the rationals, but I think it's better to think of them as filling the "holes" in the rationals. In terms of polynomials, the Stone-Weierstrass theorem tells you that every continuous function on a closed interval can be uniformly approximated by polynomials. Stated differently, the continuous functions on a closed interval are the Cauchy completion of the polynomials, for the sup norm (uniform norm, i.e. uniform convergence). (The sup norm doesn't come from an inner product, so we don't have a Hilbert space, but the idea is analogous.)
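The rationals-have-holes analogy can be checked directly. Here's a sketch using exact rational arithmetic (Newton's iteration for sqrt(2) is just one convenient Cauchy sequence; every term stays in Q):

```python
from fractions import Fraction

# Newton's iteration for sqrt(2), done entirely in Q: every term is rational.
x = Fraction(2)
terms = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    terms.append(x)

# The terms get arbitrarily close to one another (a Cauchy sequence in Q)...
gaps = [abs(terms[i + 1] - terms[i]) for i in range(len(terms) - 1)]
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))

# ...but no rational can be the limit: no term ever squares to exactly 2,
# even though the squares get absurdly close to it.
assert all(t * t != 2 for t in terms)
assert abs(terms[-1] * terms[-1] - 2) < Fraction(1, 10**20)
```

The sequence "wants" to converge to sqrt(2), which simply isn't in Q; completing Q means adjoining exactly these missing limits.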
Hi and welcome back! I was looking forward to this and am excited about the rest of the series! So a Hilbert space is more restrictive than a vector space? We only allow spaces that fit the condition, we don't add the infinite products that are not elements to our vector space? (It sounded like "includes the boundary" implies the latter) I sent you an email about a conceptual problem I have with the eigenvectors of the ladder operators or maybe defective operators in Hilbert spaces in general. I hope that wasn't too far out of scope.
Hello, thanks for watching, and being an early supporter! And yes, I would agree in saying that a Hilbert space is more restrictive than a vector space, since we only allow vector spaces that already include their limit points. You can retroactively add the limit points to a vector space, however, in what is known as the "closure" of the set. This changes your vector space though, and is more suited to be understood in the context of topology. And yes, I got your email! I haven't had the time to sit down and think through the answer, but once I do, I will let you know. -QuantumSense
@@narfwhals7843 A Hilbert space is a special type of inner product space. All Hilbert spaces are inner product spaces. But only some inner product spaces are Hilbert spaces.
The more fundamental reason that a simple vector space is not enough is as follows: a finite linear combination of basis vectors can be evaluated with only the operations of addition and scalar multiplication, which is all that the vector space supplies, along with the set of vectors and the field of scalars (which has its own internal addition and multiplication). This is essentially just plain old induction, but intuitively, with just finitely many terms, you only have finitely many operations to perform. You can simply "do all of them" and get a resulting vector.

However, an infinite linear combination of vectors does not even exist with just the supplied set, field, and operations. You need a topology, or something similar or stronger (such as a metric) to even define what a limit means (in this case, we need the limit of partial sums). But there is no reason that a field should even have a natural topology, let alone worrying about whether this would allow for convergence in useful situations. Furthermore, we need some way of translating vectors into elements of the field so that we can even use this topology/metric.

We often take limits and convergence for granted, because we often work with fields such as R or C which come with very natural and useful topologies and metrics (and hence limits) for free. But in general, a vector space may be over any field - for instance, take the finite fields. In any natural sense, these will be either discrete, indiscrete, or some quotient space between, and all of these are going to be either too strict or too loose for any useful limits; either loads of things will be a limit (or the limit will be trivial), or there will be no limit.
As in the video, we first need an inner product to project the large vector space into the small and much neater field, then we need a metric on the output field of this inner product, and only then can we define some sequence to be Cauchy, and hence whether or not every Cauchy sequence converges.
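Here's a rough numerical sketch of that chain (inner product → norm → metric → Cauchy), with made-up helper names and a crude midpoint-rule integral standing in for the true L^2 inner product on [0, 1]:

```python
import math

def l2_inner(f, g, n=2000):
    """Approximate the L^2 inner product <f, g> = integral of f*g over [0, 1]."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

def dist(f, g):
    """Metric induced by the inner product: d(f, g) = ||f - g||."""
    diff = lambda x: f(x) - g(x)
    return math.sqrt(l2_inner(diff, diff))

def taylor(n):
    """n-th Taylor partial sum of e^x: a polynomial."""
    return lambda x: sum(x**k / math.factorial(k) for k in range(n + 1))

# The partial sums form a Cauchy sequence: consecutive distances shrink fast...
d5 = dist(taylor(5), taylor(6))
d10 = dist(taylor(10), taylor(11))
assert d10 < d5 < 1e-3

# ...and the limit they crowd around is e^x, which is not a polynomial.
assert dist(taylor(12), math.exp) < 1e-9
```

Only once the inner product hands us a distance can we even ask whether this sequence is Cauchy, which is the whole point of the comment above.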
Great videos! I would be interested in more intuition about how we can construct a Cauchy complete vector space. I (basically) get how we can build up any set of "vectors" into a vector space by choosing the right rules, but how do we know that the polynomials + e^x isn't a Hilbert space, besides checking the uncountably many other infinite series that might converge outside the space?
Hello! Thanks for watching the video. The version that I previously uploaded was a teaser for the series. I’ve since finished up the series, so I decided to start fresh with episode one, while fixing a few typos with my previous upload. Thanks for sticking around since my first few uploads! -QuantumSense
The problem is he never defined the metric induced by the inner product; a metric space is complete iff every Cauchy sequence converges (look up these terms).
Here is my (verbose) motivation for completeness in general, as well as how it manifests in terms of the relevant spaces to these videos. The starting point is to consider the rational numbers, which have many "gaps" - for example, sqrt(2) is irrational, but it is a real number. The real numbers fill the gaps that the rationals possess, lending to their pictorial manifestation as an infinite, unbroken line. How do we formalize the fact that the rationals have gaps, but the reals don't? Let's consider the sequence (1, 1.4, 1.41, 1.414,...) of rational numbers which approaches sqrt(2) in decimal expansion. Clearly this can't converge within the rationals - we know sqrt(2) is irrational! Yet its terms are just (some rational)/10^k, and the terms get arbitrarily close to one another: they dance ever-closer to sqrt(2) without ever converging to it as the limit. This phenomenon does not happen in the real numbers; sequences whose terms get arbitrarily close to one another will always have each of its terms converge to a limit, because there are no holes to obstruct this from happening. Let us call a sequence whose terms get arbitrarily close to one another Cauchy, and think of convergent sequences as those whose terms eventually get arbitrarily close to a limit. Note that every convergent sequence is Cauchy, since terms that get close to a certain point will necessarily grow closer to one another. In the rationals, the converse doesn't hold, but in the reals, it does: a space is defined to be complete when every Cauchy sequence within it converges. (In fact, one can construct R from Q this way: you force the Cauchy sequences to converge, and impose an equivalence relation on the sequences that converge to the same value; the collection of these limits modulo that identification comprises the real numbers) Implicit in our discussion is the very notion of distance. 
The general notion of distance comes down to defining something called a metric (or distance function) on what is a priori a set, which takes as its entries pairs of points in the space, returning a nonnegative real number that we can think of as the distance between said points. It must satisfy three axioms that are more intuitively familiar to us than we may at first realize. First of all, the distance between two points is zero if and only if the two points are actually the same (distance is only zero from a point to itself), positive otherwise. Furthermore, it is symmetric in its arguments: the distance from point x to point y should be the same as that measured from point y to point x. Finally, it satisfies a triangle inequality. Spaces equipped with a metric are referred to as metric spaces: for example, R with its absolute value d(x, y) = |x - y|, which Q inherits (and with respect to which we conducted our first paragraph's discussion). It is with respect to a metric that notions like limits and convergence take root. Now, let's shift our attention to vector spaces, which are the natural domains of definition for quantum-mechanical processes as explained in the first video (and are fundamental in mathematics much more broadly). To start performing analysis on algebraic structures such as these, we would like to have some notion of "size" or "distance" as before. The answer comes in defining a norm ||•|| on a vector space: a function which takes in vectors and returns real numbers as "lengths", satisfying positive-definiteness, homogeneity (pulling out scalar multiples), and an analogous triangle inequality which states that the sum of norms bounds the norm of sums. This natural identification above (and the heuristic that size gives rise to distance) is not in vain: a norm on a vector space yields a metric, defined by d(v, w) = ||v - w||. Now we can start considering analytic ideas like convergence and completeness on normed vector spaces. 
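As a sanity check of those axioms, here's a small Python sketch (names are arbitrary) verifying that the norm-induced distance d(v, w) = ||v - w|| behaves like a metric on a few random vectors in R^3:

```python
import math
import random

def inner(v, w):
    """Standard dot product on R^3: a special case of an inner product."""
    return sum(a * b for a, b in zip(v, w))

def d(v, w):
    """Metric induced by the inner-product norm: d(v, w) = ||v - w||."""
    diff = [a - b for a, b in zip(v, w)]
    return math.sqrt(inner(diff, diff))

random.seed(0)
u, v, w = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))

assert d(u, u) == 0                          # distance zero only from a point to itself
assert d(u, v) > 0                           # positive otherwise
assert math.isclose(d(u, v), d(v, u))        # symmetric in its arguments
assert d(u, w) <= d(u, v) + d(v, w) + 1e-12  # triangle inequality (float slack)
```

This is of course not a proof (a proof needs the Cauchy-Schwarz inequality for the triangle step), just a concrete instance of the three axioms described above.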
Not all normed vector spaces are complete, and we're often interested in the ones that are. These are termed Banach spaces, and their study furnishes many interesting and beautiful results. Notable Banach spaces that are not Euclidean (e.g. R^n or C^n) include l^p and L^p of p-summable sequences and p-integrable functions on a measure space, respectively. We want to specialize further. As the next video explains, we are also looking for notions of angle and orthogonality to abstract from R^n to the spaces of functions we'll want to consider. The answer here comes from inner products, which in fact always produce a norm (take ||v||^2 = ⟨v, v⟩), and the special kinds of Banach spaces whose norm comes from an inner product are called Hilbert spaces. It is a result from this theory that every finite-dimensional inner product space actually turns out to be Hilbert (we get completeness "for free"), but we aren't interested in finite-dimensional vector spaces here due to the physical constraints that this video elaborates on. There are incredible theorems that display equivalences between specific Hilbert spaces (actually, the "only" separable infinite-dimensional Hilbert space is l^2 of square-summable sequences, up to isometric isomorphism), but quantum mechanics is largely concerned with the infinite-dimensional space L^2 of square-integrable functions, which is the only L^p space that carries Hilbert space structure (reading more on these in depth might involve a bit of background in measure theory). It also lays down the groundwork for Fourier analysis, has interpretations in probability theory, applications in partial differential equations, and much more.
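To illustrate the orthogonality that an inner product buys us beyond a mere metric, here's a quick sketch (helper name is mine) approximating the L^2 inner product on [0, 2π] numerically: distinct sine frequencies come out orthogonal, which is the seed of Fourier analysis mentioned above.

```python
import math

def l2_inner(f, g, a=0.0, b=2 * math.pi, n=20000):
    """Approximate <f, g> = integral of f*g over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

s = lambda k: (lambda x: math.sin(k * x))

# Distinct frequencies are orthogonal in L^2([0, 2*pi]); <sin(kx), sin(kx)> = pi.
assert abs(l2_inner(s(1), s(2))) < 1e-6
assert abs(l2_inner(s(3), s(7))) < 1e-6
assert abs(l2_inner(s(4), s(4)) - math.pi) < 1e-6
```

A metric alone could tell you these functions are a certain distance apart, but only the inner product can tell you they point in "perpendicular directions" of the function space.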
I have a question. At 1:25, while talking about how the quantum state is a linear combination of all possible outcome states, he says "We haven't proven that this list of outcome states forms a basis." It doesn't make sense to me why one would have to prove this, because the fact that a quantum state can be described as a linear combination of possible outcome states was defined by him. If the pioneers of quantum mechanics decided that a quantum state can mathematically be described in a vector space as a linear combination of outcome states, then would it not automatically follow that the outcome states form a basis of this vector space? In short, it's like saying that I created the language by which to describe something, but I have to prove the rules of this language.
Nice! Since this is quantum, the best thing is you have the ability to differentiate, to operate on every point, and since these are differentiable manifolds at those points (flat), you don't really have to worry too much about the infinite terms; in practice there will be boundaries anyway, those of the system, and renormalisation as always. Heh. The QM textbook, Griffiths, was like 1000 pages. This will take a while to complete. 🖖
A Hilbert space isn't always (almost never is) a differentiable manifold, since the homeomorphism to R^n is ill-defined in many cases. For example, the Hilbert space L^2(R), which is the usual function space used with position and momentum operators, isn't a differentiable manifold.
@@williammendez5209 This Hilbert space, when you take just a piece of it, is fine, but of course if you go to infinity, no good. It's the application to quantum: we get those fields, not the whole space, and there are boundaries of course; the most important one is H = T + U (or V) = 1, invariant, 100% energy. A pseudo-Riemannian manifold at each point of the fields, but with QFT we have those deltas, and there's no way they get to infinity; they have a limit. OK, the rest would be non-linearities... Anyway, we get the trajectory in QFT in configuration space, so a distribution trying to take in the whole fields... 🤔 😬 🖖🤓 So there's basically a little sample of adjacent pseudo-Riemannian manifolds, which in this quantum scenario are our pieces of fields; everything converges inside the same "subspace", any linear combo.
That collecting rocks and apple analogy was hilarious and surprisingly well suited to explain what was going on. But how should I think about a polynomial? It's just a number that keeps on getting bigger from the powers?
I'm not convinced e^x is outside of our vector space. If it can be expressed as an "infinite polynomial," I don't see why we can't call it one. Something that would definitely, to my intuition, be outside our vector space would be 1/x, specifically because it can't at all be approximated by a series of polynomials (as a linear combination of x^n terms for nonnegative integer n) even approaching infinitely many terms. Even if you allow coefficients to change between steps of the series, those coefficients blow up to infinity, and you only get at most one half of the function.
Infinity is a number, since all numbers have a unique, comparable value. The universe uses the speed-of-light as a finite symbol that means infinity. C^2 is a more accurate symbol, etc. Great video, well-made, and educational!
Perhaps what you are trying to say is that countable infinity is a cardinality. It isn't a real number, or an integer, or anything similar, because all of these are stricter than simply having a total order, or even a well-order (as you say, they are all comparable, which I would interpret as a total order or a well-order). Rather, all of these "numbers" have useful binary operations like addition and multiplication. These aren't just arbitrary either; addition is an abelian group operation, and multiplication distributes over addition. Infinity does not naturally fit into these operations, unless you concede to essentially turn the ring/field into a polynomial ring over infinity (in which case, infinity may as well be "X"), and the meaning of "infinity" is essentially lost to just being an extra element.
@@stanleydodds9 “Infinity is all numbers simultaneously.” Proof: sin(x) has unlimited domain, yet range is [-1,1]. Therefore, x is any number then sin(x) can also be found using only one period, and therefore infinity is always possible in sin(x) period also. Mapping any finite number onto the circumference is thought possible, though further proof is needed (I can attest). Therefore, given some finite number then infinity is indistinct and possible, using sin(x). Follows that infinity is indistinct with all finite numbers (et blau, above) and is uniquely itself. It is therefore impossible to measure infinity as better one finite number, or another, yet infinity is approximated, understood, a value like other numbers, real though now imagined in specific value. I love infinity, and use it all the time in math and science. “Prove all uniqueness of values: create all numbers. Infinity is all numbers, so the full-set of numbers is exactly infinity long. From 0, infinity is therefore all numbers, each only unique. QEDunum.” This also shows all numbers are ordered by infinity and only. Without infinite elements then some are missing! Nice discussion! Edit: I’m leaving out a discussion of how to multiply and divide using infinity, but the math is clearly seen in a famous physics equation: E=Mc^2. Using the universally constant nature of ‘c’ as a sign it implies infinity (since only infinity means everything, without changing value): multiplication must be “finite = finite * infinity” (leaving off the square for discussion), division must be “finite / finite = infinity”. Take care!
The set of square-integrable (Lebesgue) functions is a Hilbert space, and exp(x) over all of R is not square integrable, so it is not in that Hilbert space. The set of polynomials of degree at most some fixed n, being finite dimensional, is a Hilbert space. Up to isomorphism there is essentially only one separable infinite-dimensional Hilbert space: all such representations are isomorphic. Quantum states are elements of a Hilbert space, no matter the representation.
So, @quantumsensechannel, does it mean that a vector space must belong to, or be, a Hilbert space in quantum mechanics? And therefore polynomials cannot be treated as "vectors", or basically as quantum states, which are represented using kets?
I have read that the Hilbert spaces used in quantum mechanics are separable, meaning that they have a countable subset that is dense. Why is this necessary? Thank you for the videos.
In the given example I don't understand why the outcome states in the linear combination each need to represent a dimension. Like, if they are all c_n|E_n⟩, why is it necessary that |E_1⟩ have its own dimension? Linear combinations in linear algebra don't need their own dimensions, right, or am I wrong?
No, we resolved the issue by deciding that some vector can only be a quantum state if it lives in a vector space, where adding infinitely many rocks always gives a rock and never apple pie. There still are vector spaces, where adding infinitely many rocks will give apple pie, but the elements of those vector spaces can't be quantum states.
I truly love your work; it is such a brilliant effort. The mathematical formulation of quantum mechanics is something only a few people have talked about on YouTube. I just wanted to ask about the polynomials representing a vector space, because one of the axioms of a vector space is closure under addition, which is something polynomials don't seem to satisfy. I am not assuming I am right, just wanted to point that out.
Hello! Thank you for watching, and the kind words. And to answer your question, polynomials are indeed closed under addition. Adding two polynomials together will always give you another polynomial. Is there some example that seems to be contradictory? Let me know, and I can try and clear it up! -QuantumSense
@@quantumsensechannel Thank you for your response. My confusion arose while I was digging into the properties of vector spaces and found those claims, so I decided to ask ChatGPT. It was first affirmative about polynomials being vector spaces, but then it said the opposite. I am sorry; is there a way I could send you the screenshots of what it said? The main concern is that adding polynomials of the same degree could give you another polynomial of a different degree, and I'm not sure if that violates the closure axiom.
Hello, Ah, I see. AI may be quite intelligent, but we should not rely on it as the sole source of mathematical understanding! The set we are considering is the set of polynomials with real number coefficients, we make no mention of the degree. So adding two polynomials with real coefficients, will always give you another polynomial with real coefficients, hence closure under addition. ChatGPT may be considering the set of all polynomials of degree n. This, indeed, would NOT be a vector space, since adding two polynomials of degree n could give you a polynomial of different degree (e.g. (1+x^5) + (1-x^5) = 2). So ChatGPT seems to be considering a different set than the one I mention in the video. Let me know if this doesn’t clear it up! -QuantumSense
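A little Python sketch of that closure argument (the coefficient-list representation is my own choice, purely for illustration): the degree can drop when you add, but the sum is always still a polynomial.

```python
def add_poly(p, q):
    """Add polynomials given as coefficient lists [c0, c1, ...], trimming zeros."""
    n = max(len(p), len(q))
    s = [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]
    while s and s[-1] == 0:
        s.pop()
    return s

# (1 + x^5) + (1 - x^5) = 2: still a polynomial, just of lower degree.
p = [1, 0, 0, 0, 0, 1]    # 1 + x^5
q = [1, 0, 0, 0, 0, -1]   # 1 - x^5
assert add_poly(p, q) == [2]

# So "polynomials of degree exactly 5" is not closed under addition,
# but the set of ALL polynomials is: the sum is always a finite coefficient list.
```

The key observation is that the result is always a finite list, so it never leaves the set of all polynomials, which is exactly the closure property in question.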
@@quantumsensechannel I am really grateful for your response; it cleared all my confusion. Your videos really helped me understand these concepts further. I honestly hope you succeed. I actually started studying QM out of curiosity from a young age; I had a solid grasp of the abstract concepts but needed to build mathematical intuition, and your videos are perfect. Thank you so much 😊
Hey, please explain terms like "separable" and "complete" for a Hilbert space in terms of Cauchy sequences, as is usual in many books. This part about Cauchy completeness is really amazing. Great video and great explanation 👏🏻
A vector space is defined by some "rules", the first of which, as illustrated, is that the sum of two vectors is a vector. If I understood properly, the motivation for a Hilbert space is that an infinite sum of elements need not be defined within the space itself: the (infinite) sum of polynomials is not necessarily a polynomial, is that so?
Since e^x can be written as an infinite sum of powers of x, why can't we consider it to be inside the space of polynomials? What actually goes wrong?
Polynomials are defined to be finite sums. Infinite sums are not really sums but limits of sums, so you need a mathematical structure that lets you take limits in order to define them. In the vector space of polynomials we usually don't have that defined in the context of linear algebra, precisely because you can have series of polynomials that converge to things that are not polynomials, such as sine, cosine, and the exponential.
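A small numerical sketch of "limits of sums" (the function name is my own): each partial sum of the Taylor series for e^x is a genuine polynomial, and the partial sums get arbitrarily close to e^x, yet no single one of them *is* e^x.

```python
import math

def taylor_partial(x, n):
    """Sum of the first n + 1 terms of the exponential series at x."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Every partial sum is a polynomial; the error against e^1 shrinks toward 0,
# but never reaches it for any finite n.
for n in (2, 5, 10, 20):
    print(n, abs(taylor_partial(1.0, n) - math.e))
```

The limit object e^x lives outside the space of polynomials, which is exactly why limits need extra structure beyond addition and scalar multiplication.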
If we start with the assumption that the vector space of polynomials is closed, and then find that an infinitely long polynomial converges to e^x, you say that we should conclude we made a mistake in assuming an infinitely long polynomial was allowed, that something "went wrong" when we did it. But couldn't we just as easily conclude that e^x is part of the vector space of polynomials?
e^x isn’t a polynomial, so if we mean “the vector space consisting exactly of all the polynomials”, e^x isn’t in it. But we could perhaps take some completion of the space which would include it?
@@badlydrawnturtle8484 Do you mean by redefining “polynomial”, or just concluding that it is one in the existing definition of polynomial? We can’t conclude that because it doesn’t follow. And, it can’t follow, because the conclusion is false: Any non-constant polynomial in one variable has at least one complex root. The e^x function has no roots, and is non-constant. Therefore it is not a polynomial.
@@badlydrawnturtle8484 Do you know the definition of a polynomial? If you know the definition, then you know why we cannot conclude exp(x) is a polynomial. exp(x) is an example of a power series, but not of a polynomial.
In the definition of a vector space we defined the coefficients to be scalars. Shouldn’t the definition of the Hilbert space also include the fact that the coefficients are extended to be complex numbers?
Hello! Thank you for watching. And no, not necessarily. A Hilbert space does not need complex coefficients. As an example, R^2 is a Hilbert space, yet the underlying scalar field is real. So being a Hilbert space says nothing about whether or not your vector space has complex coefficients, although we will see that we need complex numbers in quantum mechanics when we derive the Schrödinger equation. -QuantumSense
@@quantumsensechannel THANK YOU VERY MUCH! Your channel is awesome. My only regret is that YouTube allows me to give you only one thumbs up. 👍
No. What it means is that only some vector spaces are Hilbert spaces. A vector space is not required to have infinite linear combinations. A Hilbert space does.
Zeno's paradox stuff: it's the same as 0.9999... repeating being less than one, or not, and deciding if and when it matters. A mathematician stands at the edge of the 0.999.../1.000 cliff: which leg is at the bottom of the cliff, and does her head hurt?
@4:10 what? Nothing "went wrong". All numbers are concepts, not just "infinity". In QM you are not interested in staying within the space of finite polynomials. You want the transcendentals, but you want them to have _physical_ boundary conditions, that is, empirically correct asymptotics. That's why an exponential never arises: because (ignoring gravity singularities) physical wave functions never blow up at spatial infinity. In fact, check out Carl Bender's lectures. Sometimes you even _want_ expressions to blow up to infinity, as series approximations, because then they sum faster (computationally) to physical finite values using asymptotic methods for the analytic continuations on the complex plane. The whole obsession by a few freaks with "finitism" and "discrete" physics is nuts. I'd say "completely nuts" but that'd be wrong, because non-trivial spacetime topology can give us discrete physics, via the homotopy structure, not the holonomic structure, even while spacetime is (or can be) still an ideal continuum.
I think calculus on surreal numbers will soon be mature enough to eliminate the problems with infinity in reals and complex numbers defined in the classical way.
Just because e^x isn't a polynomial doesn't mean it's outside of the vector space. e^x is a linear combination of powers of x and so in the x^n basis e^x is a vector in that space.
In that case, the vector space is the space of polynomials. e^x is not a polynomial; therefore, it is outside that vector space. The problem is that linear combinations are, in essence, finite. That is so because they are defined with respect to addition, which is a binary operation (that is, it takes 2 vectors to 1 vector). Applying addition repeatedly allows us to make a linear combination with however many terms we want in it, but it does not allow us to make an infinite linear combination. To define infinite linear combinations, you need to mix linear algebra with calculus; that means you have to be able to define and take limits of sums in vector spaces. This doesn't mean, however, that vector spaces (which are closed under addition) will be closed with respect to the limits we are taking when summing up to infinity.
@@cauebonassi9225 Thank you for the reply! There's something I'm not understanding at a fundamental level. How is taking an infinitely long linear combination of powers of x fundamentally different from a finite-length sum of powers of x? If every power of x represents a basis vector, then e^x would require an infinite-dimensional space, but don't the basic rules of linear algebra hold (dot product, norms, etc.)?
I can’t believe that having known the Taylor series of e^x for so long I’ve never realized that it’s an example of an infinite series of vectors that does not converge to a vector. The metric space of polynomials is not complete but a Hilbert space is!
BRA. I ALMOST FELL ASLEEP AND NOW I CAN'T CLOSE MY EYES.
Thank you.
But damn you.
No, for real. Cool, never realized that. Have a nice day
@@tomtomtomtom691 I'm not aware of polynomials being a metric space. What is the metric?
@@tomtomtomtom691 Like the Romanian guy above said, I think you can't have infinite sums in a vector space without additional structure. Also, the sum doesn't actually equal e^x, but converges to it. To speak of convergence, we need a topology. So we're talking about a topological vector space, at least.
Wait but e^x is a vector, just not a polynomial...
I am delighted to encounter a mathematically literate explanation of modern physical theory that is both elegant and clearly explained.
You are a true find, sir.
It's also the entirely wrong explanation. The necessity for Hilbert spaces in physics does not come from the approximation theory of L_2 functions. It comes from Kolmogorov's axioms for ensembles. The creator of this video keeps barking up an entirely wrong tree here.
The polynomial example was enlightening for understanding completeness. This series is great!
I love the illustration with the infinite series of polynomials converging to a nonpolynomial which is out of the vector space. I wish I had that type of illustration when I learned about Hilbert space
Except that there is nothing wrong with that limit. It just isn't physics: you'd impose the correct asymptotics on a computation of the amplitudes, so you would never get an exponential in QM, except as a mistake or a bad approximation. Imposing correct conservation-law asymptotics might still yield a transcendental wave function; it just won't be one blowing up at spatial infinity.
@@Achrononmaster I think you aren't getting the mathematical concepts being explained. The reality of the limit has nothing to do with physics at all; it stands in its own mathematical right. As for your assertion that there are no exponentials in QM, that is quite interesting and merits a discussion as to what constitutes a necessary feature of any formulation of quantum theory. If GR and QM are ever to get closer, then such dogma will have to be addressed. Or are there no exponentials in GR either?
0:00 - Recap
0:57 - Dimension of a vector space
2:03 - Problems with infinite dimensions
5:39 - Hilbert spaces and Cauchy completeness
Thank you so much. We were taught the definition of a Hilbert space but were never told why we'd need it. I guess it's inherent in the definition, but when you get bombarded with 1000 definitions at once, stuff like this completely flies under the radar.
I went through 10 blog posts and 10 or more videos to understand Hilbert spaces, but none of them made sense to me. Finally God rewarded me with your video in my suggestions. Now I know what a Hilbert space is and why it's essential when we convert a classical ML feature space to a QML feature space :D. Thanks for your awesome videos :D
You have opened my eyes! I finished the "Metric Spaces" course in my uni but never really understood how it related to quantum mechanics! Now, it makes sense!
Thanks!
Just discovered your channel. You are an amazing teacher, no exaggeration!
Finally, I understood with a good example, why we need Hilbert space. Thank you.❤
Wow, people love to give daunting names to simple concepts. I love this series
Questions:
- If an infinite polynomial can converge to an exponential, why can't we just say that the exponential is in fact an infinite polynomial? And the same for any other function that we can take a Taylor series of.
- The rule we added is exactly the Cauchy completeness condition. But a Hilbert space also has the inner product condition, which we seemingly don't need for now. Why do we add it, then? Why can't we assume that our space is just Cauchy complete, without an extra inner product?
I think the addition of the inner product condition makes the Hilbert space Cauchy complete. We only added one thing (as I understand).
- Functions that can be expressed as a power series (we call these analytic functions) do indeed form a vector space. However, it is a different vector space from that of polynomials. Also note that if we take an infinite sum of exponentials (which are vectors in a vector space of analytic functions), we might end up with a step function, which is not analytic, and in fact not even continuous! Then by taking a potentially infinite sum of step functions, we might end up with a different vector space still. Eventually the space we're working with loses all the nice properties that the original vector space started with. Not to mention that if we allow infinite sums, it's not even obvious which space we're actually working with.
- Note that Cauchy completeness uses the language of "converging". But what does converging even mean? We need some notion of distance on our vector space in order to even say that two vectors are close together, or that a sequence of vectors is approaching some other vector. The inner product gives us that notion of distance. Note that for R^n we define the length of a vector with the dot product, which is just a special case of an inner product.
@@mironhunia300
However, an inner product is much more than a distance function (a metric). It allows you to measure the length of vectors (the norm) and lets you talk about projections and orthogonality between vectors.
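A hedged numerical sketch of the chain described in this thread (the helper names are my own): an L2-style inner product ⟨f, g⟩ = ∫ f(x)g(x) dx on [0, 1], approximated by a Riemann sum, induces a norm ||f|| = sqrt(⟨f, f⟩), and the norm induces a distance d(f, g) = ||f − g||, which is exactly what "convergence" needs in order to mean anything.

```python
import math

def inner(f, g, n=10000):
    """Riemann-sum approximation of the integral of f(x) * g(x) over [0, 1]."""
    dx = 1.0 / n
    return sum(f(i * dx) * g(i * dx) for i in range(n)) * dx

def norm(f):
    """Norm induced by the inner product."""
    return math.sqrt(inner(f, f))

def dist(f, g):
    """Metric induced by the norm."""
    return norm(lambda x: f(x) - g(x))

# Distance between the "vectors" x and x^2; the exact value is sqrt(1/30).
print(dist(lambda x: x, lambda x: x * x))
```

With a distance function in hand, "the sequence of partial sums is Cauchy" and "the sequence converges to v" become precise statements about these numbers shrinking to zero.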
AMAZING! Thank you so much for the effort of making such content. I'm currently learning QM all over again, and even though I thought I had mastered it, I realize that I didn't have these intuitions and was kind of just brute-forcing the math ^^.
THANK YOU !
This video is awesome! I've always wanted to understand what the Hilbert Space is and finally I can understand the idea. Thank you so much for creating this series of videos!
Wow... what an easy way to understand Hilbert space! Great job!
Good definition on the Hilbert space - something often just accepted in a Quantum course.
I've taken complex analysis, topology and learned a little bit group theory. They're all helpful for me to understand Hilbert vector space. Your videos gave me a chance to connect many pieces of them together. Thank you!
Really, it gives a depth of understanding regarding Hilbert space
This was a great video and I was so surprised when we came to the definition of Hilbert space and it seemed so clear and fitted together
The way you phrased the last sentence 6:56 (about our vector space ‘including the edge’) threw me off, but I think you were saying that the sum of the infinite linear combination lives outside of our vector space (because of those initial rules) and so we do not include it as part of the Hilbert Space.
But the one thing I feel I don’t understand is, how does the existence of our boundary inform the way we treat these mathematical operations/objects. It seems we are perfectly fine utilizing infinite polynomial sums -> that themselves converge to polynomials. But what do we do with our e^x infinite linear combination? If we come across any general infinite linear combination that we find out is non-polynomial, do we discard the whole thing and say we don’t want anything to do with that calculation? Or do we say in those situations we force ourselves to only take finite sums, and treat it as a working approximation? I guess I’m asking, as we’ve barred these non-polynomials, what do we do when we encounter them? I can see that we’ve banned them but I don’t know how we go about enforcing our ban.
We are not barring out functions which are not polynomials. We are getting rid of the vector space of polynomials altogether, and replacing it with a better vector space.
This is the best explanation of this concept which I have ever had the pleasure of learning from. Wow.
Beautifully describes QM in just the right amount of detail. Keep it going👌, very good work
Really good visualisation of Hilbert Space, and why it is needed.
This single video just blew my mind suddenly everything i studied in real analysis and functional analysis just fell into place.
1st viewer going to be a long time viewer. See the thing is I've been looking for a channel like yours who explains every detail as if it were a class. Thx & God bless 🙏 you!
What isn't quite satisfying to me is... The exponential is the limit of infinite polynomial terms. So why isn't it a member of the vector space of polynomials? It has a well defined and unique set of coefficients in the x^n basis, so I don't see why it doesn't count beyond just happening to require infinitely many non-zero coefficients. It doesn't quite match the usual conception of a "polynomial," but that seems more like a matter of language than of math.
The logarithm is definitely _not_ in the vector space of polynomials since there is no element, even with infinitely many non-zero coefficients, that actually matches the function, but the exponential, along with sine and cosine have no such issue.
You are correct; e^x is a polynomial.
Hello! This is a really good question!
You say that it seems like a matter of language as opposed to math, and that’s sort of the key. Remember that math is partially the art of precise language; what distinguishes a polynomial from an arbitrary function is how we define a polynomial. From the definition, we can derive several properties that *must* hold true of any polynomial.
For example, it’s easy to show that for every polynomial, there exists some integer N such that after taking the Nth derivative, you get zero. e^x satisfies no such property. Likewise, you can prove that any non constant polynomial approaches either + or - infinity as x goes to +- infinity. e^x approaches zero as we go to -infinity, so this property is broken.
So yes, in a way it is a matter of language, but we need to be precise with our language in order to be consistent (otherwise, we could say that any function with a Taylor series is a polynomial…which isn’t a very good definition for a polynomial).
Hopefully this partially addressed your question!
-QuantumSense
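The "Nth derivative" property in the reply above can be checked mechanically; here is a sketch with a made-up helper name, representing a polynomial by its coefficient list:

```python
# Differentiating a degree-d polynomial d + 1 times yields exactly zero, the
# property from the reply above that e^x can never satisfy (its derivative is
# itself, forever). Polynomials are coefficient lists [a0, a1, a2, ...].

def diff_poly(p):
    """Formal derivative: d/dx of sum(a_k x^k) is sum(k a_k x^(k-1))."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

p = [3, 0, 2, 5]  # 3 + 2x^2 + 5x^3, degree 3
for _ in range(4):  # differentiate degree + 1 = 4 times
    p = diff_poly(p)
print(p)  # [0]
```

No such N exists for the exponential series, since differentiating it term by term reproduces the same infinite coefficient list.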
@@quantumsensechannel The Nth derivative rule pretty cleanly rules out any polynomial of infinite degree. The divergence rule already has an exception, so it's not really as convincing as a rule rather than just a trend.
As I mentioned in my initial comment, allowing infinite polynomials still excludes many functions, namely those with finite radii of convergence, like the aforementioned logarithm. The inverse function also doesn't have a Taylor series. _Most_ inverses of polynomials, infinite or otherwise, probably aren't polynomials. Luckily the definition of a vector space doesn't say anything about needing an inverse.
@@angeldude101 I can't post a link in the comments, but search for "Why is the exponential function not in the subspace of all polynomials?" on Math Stackexchange. Your doubt seems to be addressed there. In the video, the analogy of vectors inside the box and their limit being on the edge is relevant. We need to invoke topology at some point to make complete sense of it, as far as I understand.
@@angeldude101 The major problem with allowing "polynomials with infinitely many non-zero coefficients" is that, as you yourself stated, this requires a lot of additional structure: it only really makes sense when these polynomial approximations converge, which requires a notion of convergence, which in itself requires a metric, which induces a topology, and so on. This space can be defined (it is usually called the closure of the underlying set), but it is usually not thought of as the set itself, since polynomials can exist in a space without all this extra machinery.
Hope this helps
Excellent intuitive explanation of a Hilbert space
Based on your analogy of including the boundary condition I initially believed that a Hilbert space is a vector space that also included cases like polynomials approaching e^x. However, based on the written definition I think it might mean that the polynomials don't count as existing in a Hilbert space due to the ability to approach e^x. Is this right?
Like, we're saying that Hilbert spaces are the subset of vector spaces for which it is impossible to exit the space by approaching the boundary, because the way these vector spaces are defined means the boundary is included. And this inclusion is captured by the rule of Cauchy completeness with respect to the inner product.
Frankly, I've already spent waaay too much time trying to formulate a precise and accurate answer to your question. I'll cheat a bit saying this: Boundaries are not an easy thing to define, since this requires the notion of a metric on the given space (which I believe is why Hilbert spaces necessarily require an inner product, inducing the metric).
Given a metric on a space: if that space is compact, meaning that it is bounded (there is a maximum to its norm) and closed (meaning that it contains its boundary), then it is Cauchy complete. But the other way around is not necessarily true: a space can be Cauchy complete without being compact.
The set of polynomials is not considered Cauchy complete, because there exist sequences of elements of the set (e.g. Taylor Polynomials) which converge to things that are not members of the set themselves (e.g. the exponential function). The functions that can be expressed as Taylor series can therefore be thought of as some sort of "boundary" of the set of polynomials, though I would not be able to show that this forms the complete boundary.
What may help here is the analogy with the rationals. They form a Q-vector space, but they are not Cauchy complete. The Cauchy-completion of the rationals is the Reals, i.e. these fill up the gaps between the rationals. So you could view the irrationals as the boundary of the rationals, but I think it's better to think of these as filling the "holes" in the rationals.
In terms of polynomials, the Stone-Weierstrass theorem tells you that every continuous function on a closed interval can be uniformly approximated by polynomials. Stated differently, the continuous functions on a closed interval are the Cauchy completion of the polynomials with respect to the sup norm (the uniform norm, i.e. uniform convergence). (The sup norm doesn't come from an inner product, so we don't get a Hilbert space, but the idea is analogous.)
Wish I had this when learning quantum mechanics. Your visualisations and analogies are much more understandable.
Hi and welcome back!
I was looking forward to this and am excited about the rest of the series!
So a Hilbert space is more restrictive than a vector space? We only allow spaces that fit the condition, we don't add the infinite products that are not elements to our vector space? (It sounded like "includes the boundary" implies the latter)
I sent you an email about a conceptual problem I have with the eigenvectors of the ladder operators or maybe defective operators in Hilbert spaces in general. I hope that wasn't too far out of scope.
Hello, thanks for watching, and being an early supporter!
And yes, I would agree in saying that a Hilbert space is more restrictive than a vector space, since we only allow vector spaces that already include their limit points. You can retroactively add the limit points to a vector space, however, in what is known as the "closure" of the set. This changes your vector space though, and is more suited to be understood in the context of topology.
And yes, I got your email! I haven't had the time to sit down and think through the answer, but once I do, I will let you know.
-QuantumSense
@@quantumsensechannel Thank you very much for your reply!
@@narfwhals7843 A Hilbert space is a special type of inner product space. All Hilbert spaces are inner product spaces. But only some inner product spaces are Hilbert spaces.
The more fundamental reason that a simple vector space is not enough is as follows: a finite linear combination of basis vectors can be evaluated with only the operations of addition and scalar multiplication, which is all that the vector space supplies, along with the set of vectors and the field of scalars (which has its own internal addition and multiplication). This is essentially just plain old induction, but intuitively, with just finitely many terms, you only have finitely many operations to perform. You can simply "do all of them" and get a resulting vector.
However, an infinite linear combination of vectors does not even exist with just the supplied set, field, and operations. You need a topology, or something similar or stronger (such as a metric) to even define what a limit means (in this case, we need the limit of partial sums). But there is no reason that a field should even have a natural topology, let alone worrying about whether this would allow for convergence in useful situations. Futhermore, we need some way of translating vectors into elements of the field so that we can even use this topology/metric.
We often take limits and convergence for granted, because we often work with fields such as R or C which come with very natural and useful topologies and metrics (and hence limits) for free. But in general, a vector space may be over any field - for instance, take the finite fields. In any natural sense, these will be either discrete, indiscrete, or some quotient space between, and all of these are going to be either too strict or too loose for any useful limits; either loads of things will be a limit (or the limit will be trivial), or there will be no limit.
As in the video, we first need an inner product to project the large vector space into the small and much neater field, then we need a metric on the output field of this inner product, and only then can we define some sequence to be Cauchy, and hence whether or not every Cauchy sequence converges.
Great videos! I would be interested in more intuition about how we can construct a Cauchy-complete vector space. I (basically) get how we can build up any set of "vectors" into a vector space by choosing the right rules, but how do we know that the polynomials plus e^x don't form a Hilbert space, besides checking the uncountably many other infinite series that converge outside the space?
That isn't something that can be explained with a single video.
I also finished with that question in mind but couldn't find the answer on Google
Did you reupload this? I like this version and your narrative very much
Hello! Thanks for watching the video.
The version that I previously uploaded was a teaser for the series. I’ve since finished up the series, so I decided to start fresh with episode one, while fixing a few typos with my previous upload.
Thanks for sticking around since my first few uploads!
-QuantumSense
incredible work! You deserve so many more subs
Cauchy completeness is something I never understood fully.
I just got into college and while watching this I was like “who’s Cauchy”
*completely
why, my Dear?..🍷😊
The problem is he never defined the metric induced by the inner product; a metric space is complete iff every Cauchy sequence converges (look up these terms).
Here is my (verbose) motivation for completeness in general, as well as how it manifests in terms of the relevant spaces to these videos.
The starting point is to consider the rational numbers, which have many "gaps" - for example, sqrt(2) is irrational, but it is a real number. The real numbers fill the gaps that the rationals possess, lending to their pictorial manifestation as an infinite, unbroken line. How do we formalize the fact that the rationals have gaps, but the reals don't?
Let's consider the sequence (1, 1.4, 1.41, 1.414,...) of rational numbers which approaches sqrt(2) in decimal expansion. Clearly this can't converge within the rationals - we know sqrt(2) is irrational! Yet its terms are just (some rational)/10^k, and the terms get arbitrarily close to one another: they dance ever-closer to sqrt(2) without ever converging to it as the limit. This phenomenon does not happen in the real numbers; sequences whose terms get arbitrarily close to one another will always have each of its terms converge to a limit, because there are no holes to obstruct this from happening.
Let us call a sequence whose terms get arbitrarily close to one another Cauchy, and think of convergent sequences as those whose terms eventually get arbitrarily close to a limit. Note that every convergent sequence is Cauchy, since terms that get close to a certain point will necessarily grow closer to one another. In the rationals, the converse doesn't hold, but in the reals, it does: a space is defined to be complete when every Cauchy sequence within it converges. (In fact, one can construct R from Q this way: you force the Cauchy sequences to converge, and impose an equivalence relation on the sequences that converge to the same value; the collection of these limits modulo that identification comprises the real numbers)
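The sequence (1, 1.4, 1.41, 1.414, ...) from the paragraph above can be made concrete (the helper name is my own): every term is an exact rational, consecutive terms get arbitrarily close (the Cauchy property), yet the limit sqrt(2) is not rational.

```python
from fractions import Fraction
import math

def truncation(k):
    """Rational truncation of sqrt(2) to k decimal places."""
    return Fraction(int(math.sqrt(2) * 10 ** k), 10 ** k)

terms = [truncation(k) for k in range(6)]
gaps = [abs(terms[i + 1] - terms[i]) for i in range(5)]
print([str(t) for t in terms])   # exact rationals: 1, 7/5, 141/100, ...
print([float(g) for g in gaps])  # consecutive gaps shrink toward 0
```

Each term squares to strictly less than 2, so the sequence is Cauchy in Q but has no rational limit: that is the "gap" that completion fills.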
Implicit in our discussion is the very notion of distance. The general notion of distance comes down to defining something called a metric (or distance function) on what is a priori a set, which takes as its entries pairs of points in the space, returning a nonnegative real number that we can think of as the distance between said points. It must satisfy three axioms that are more intuitively familiar to us than we may at first realize. First of all, the distance between two points is zero if and only if the two points are actually the same (distance is only zero from a point to itself), positive otherwise. Furthermore, it is symmetric in its arguments: the distance from point x to point y should be the same as that measured from point y to point x. Finally, it satisfies a triangle inequality. Spaces equipped with a metric are referred to as metric spaces: for example, R with its absolute value d(x, y) = |x - y|, which Q inherits (and with respect to which we conducted our first paragraph's discussion). It is with respect to a metric that notions like limits and convergence take root.
Now, let's shift our attention to vector spaces, which are the natural domains of definition for quantum-mechanical processes as explained in the first video (and are fundamental in mathematics much more broadly). To start performing analysis on algebraic structures such as these, we would like to have some notion of "size" or "distance" as before. The answer comes in defining a norm ||•|| on a vector space: a function which takes in vectors and returns real numbers as "lengths", satisfying positive-definiteness, homogeneity (pulling out scalar multiples), and an analogous triangle inequality which states that the sum of norms bounds the norm of sums.
This natural identification above (and the heuristic that size gives rise to distance) is not in vain: a norm on a vector space yields a metric, defined by d(v, w) = ||v - w||. Now we can start considering analytic ideas like convergence and completeness on normed vector spaces.
Not all normed vector spaces are complete, and we're often interested in the ones that are. These are termed Banach spaces, and their study furnishes many interesting and beautiful results. Notable Banach spaces that are not Euclidean (e.g. R^n or C^n) include l^p and L^p of p-summable/integrable functions on a measure space respectively.
We want to specialize further. As the next video explains, we are also looking for notions of angle and orthogonality to abstract from R^n to the spaces of functions we'll want to consider. The answer here comes from inner products, which in fact always produce a norm (take ||v||^2 = ⟨v, v⟩), and the special kinds of Banach spaces whose norm comes from an inner product are called Hilbert spaces.
It is a result from this theory that every finite-dimensional inner product space actually turns out to be Hilbert (we get completeness "for free"), but we aren't interested in finite-dimensional vector spaces here due to the physical constraints that this video elaborates on. There are incredible theorems that display equivalences between specific Hilbert spaces (actually, the "only" infinite-dimensional separable Hilbert space is l^2 of square-summable sequences, up to isometric isomorphism), but quantum mechanics is largely concerned with the infinite-dimensional space L^2 of square-integrable functions, which is the only L^p space that carries Hilbert space structure (reading more on these in depth might involve a bit of background in measure theory). It also lays down the groundwork for Fourier analysis, has interpretations in probability theory, applications in partial differential equations, and much more.
Awesome! Thank you so much, that is exactly what I was looking for! Finally I get a deeper understanding! 🎉
Love the graph illustrations! Love the apple pie analogy! ❤😂
It's probably the clearest intro on the math for QM and Hilbert space.
Holy damn! Why has YouTube never recommended this to me before!!! AMAZING VIDEOS!
Why is e^x considered outside of the vector space?
It satisfies all the aforementioned conditions too, just like polynomials.
I have a question
At 1:25, while talking about how the quantum state is a linear combination of all possible outcome states, he says "We haven't proven that this list of outcome states forms a basis".
It doesn't make sense to me why one would have to prove this, because the fact that a quantum state can be described as a linear combination of possible outcome states was defined by him.
If the pioneers of quantum mechanics decided that a quantum state can mathematically be described in a vector space as a linear combination of outcome states, then would it not automatically follow that the outcome states would form the basis of this vector space.
In short, its like saying that I created the language by which to describe something, but I have to prove the rules of this language.
Great explanation! Waiting for more in the series.
Wow, I love the explanation. Very good for understanding what each thing is for. Congratulations on your work!!
Thank you very much!
It was hard for me to understand the Cauchy condition until I saw your video. Thanks, bro!
thank you so much God bless you....waiting for the next chapter........
Nice!
Since this is quantum, the best thing is that you have the ability to differentiate, to operate at every point; and since these are differentiable manifolds at those points, flat, you don't really have to worry too much about the infinite terms. In practice it will have boundaries anyway, those of the system, and renormalisation as always.
Heh. The Q M textbook, Griffiths, was like 1000 pages. This will take a while to complete. 🖖
A Hilbert space isn't always (almost never is) a differentiable manifold, since the homeomorphism to R^n is ill-defined in a lot of cases. For example, the Hilbert space L^2(R), which is the usual function space used with position and momentum operators, isn't a differentiable manifold.
@@williammendez5209 This Hilbert space works when you take just a piece, but of course, if you go to infinity, no good.
It's the application to the quantum case. We get those fields, not the whole space, and there are boundaries of course, the most important one being H = T + U (or V) = 1, invariant, 100% of the energy.
A pseudo-Riemannian manifold at each point of the fields, but with QFT we have those deltas, and there's no way they get to infinity. They have a limit. OK, the rest would be non-linearities...
Anyway, we get the trajectory in QFT in configuration space, so a distribution trying to take in the whole fields.
... 🤔 😬 🖖🤓
So there's basically a little sample of adjacent pseudo-Riemannian manifolds, which in this quantum scenario are our pieces of fields; any linear combination converges inside the same "subspace".
Very illustrative and funny. Congrats!
This is such an excellent series!
Great idea and execution, keep it up!
That collecting rocks and apple analogy was hilarious and surprisingly well suited to explain what was going on. But how should I think about a polynomial? It's just a number that keeps on getting bigger from the powers?
But if you have an infinite-dimensional polynomial vector space, isn't the Taylor representation of e^x still a member of that infinite-dimensional polynomial vector space??
I'm not convinced e^x is outside of our vector space. If it can be expressed as an "infinite polynomial," I don't see why we can't call it one. Something that would definitely, to my intuition, be outside our vector space would be 1/x, specifically because it can't at all be approximated by a series of polynomials (as a linear combination of x^n terms for nonnegative integer n) even approaching infinitely many terms. Even if you allow coefficients to change between steps of the series, those coefficients blow up to infinity, and you only get at most one half of the function.
Infinity is a number, since all numbers have a unique, comparable value. The universe uses the speed-of-light as a finite symbol that means infinity. C^2 is a more accurate symbol, etc. Great video, well-made, and educational!
Perhaps what you are trying to say is that countable infinity is a cardinality. It isn't a real number, or integer, or anything similar, because all of these are more strict than simply having a total order, or even a well order (as you say, they are all comparable, which I would interpret as a total order or a well order).
Rather, all of these "numbers" have useful binary operations like addition and multiplication. These aren't just arbitrary either; addition is an abelian group operation, and multiplication distributes over addition. Infinity does not naturally fit into these operations, unless you concede to essentially turn the ring/field into a polynomial ring over infinity (in which case, infinity may as well be "X"), and the meaning of "infinity" is essentially lost to just being an extra element.
@@stanleydodds9 “Infinity is all numbers simultaneously.” Proof: sin(x) has unlimited domain, yet range is [-1,1]. Therefore, x is any number then sin(x) can also be found using only one period, and therefore infinity is always possible in sin(x) period also. Mapping any finite number onto the circumference is thought possible, though further proof is needed (I can attest). Therefore, given some finite number then infinity is indistinct and possible, using sin(x). Follows that infinity is indistinct with all finite numbers (et blau, above) and is uniquely itself.
It is therefore impossible to measure infinity as better one finite number, or another, yet infinity is approximated, understood, a value like other numbers, real though now imagined in specific value. I love infinity, and use it all the time in math and science. “Prove all uniqueness of values: create all numbers. Infinity is all numbers, so the full-set of numbers is exactly infinity long. From 0, infinity is therefore all numbers, each only unique. QEDunum.” This also shows all numbers are ordered by infinity and only. Without infinite elements then some are missing! Nice discussion!
Edit: I’m leaving out a discussion of how to multiply and divide using infinity, but the math is clearly seen in a famous physics equation: E=Mc^2. Using the universally constant nature of ‘c’ as a sign it implies infinity (since only infinity means everything, without changing value): multiplication must be “finite = finite * infinity” (leaving off the square for discussion), division must be “finite / finite = infinity”. Take care!
The set of square-integrable
(Lebesgue) functions is a Hilbert space.
So exp(x), which is not square integrable over the real line, is not in that Hilbert space.
The set of polynomials (finite sums) is an inner product space, but it is not complete, so it is not a Hilbert space.
Up to isometric isomorphism, there exists essentially only one separable infinite-dimensional Hilbert space.
All representations of that Hilbert space are isomorphic.
Quantum states are elements of a Hilbert space, no matter the representation.
Just Beautiful.
So, @quantumsensechannel, does it mean that a vector space must belong to, or be, a Hilbert space in quantum mechanics? And therefore, can polynomials not be treated as "vectors", or basically as quantum states, which are represented using the ket?
I have read that the Hilbert spaces used in quantum mechanics are separable, meaning that they have a countable subset that is dense. Why is this necessary? Thank you for the videos.
Also, I would like to know why these spaces must be projective. Thank you in advance
In the given example I don't understand why the outcome states in the linear combination each need to represent a dimension. Like, if they are all c_n|E_n⟩, why is it necessary that |E_1⟩ have its own dimension? Linear combinations in linear algebra don't need their own dimensions, right, or am I wrong?
No, thank YOU for explaining it to us. Thanks a lot, brother.
you're life saver San!
So… we resolved the issue of apple pie by just deciding to expand the definition of rocks to include apple pie? 🤔
No, we resolved the issue by deciding that some vector can only be a quantum state if it lives in a vector space, where adding infinitely many rocks always gives a rock and never apple pie. There still are vector spaces, where adding infinitely many rocks will give apple pie, but the elements of those vector spaces can't be quantum states.
Wow! Nice explanation.🙌
What a great explanation!
I truly love your work, it is such a brilliant effort. The mathematical formulation of quantum mechanics is something that only a few people have talked about on YouTube.
I just wanted to ask about the polynomials representing a vector space, because one of the axioms of a vector space is closure under addition, which is something that polynomials don't satisfy. I am not assuming I am right, I just wanted to point that out.
Hello! Thank you for watching, and the kind words.
And to answer your question, polynomials are indeed closed under addition. Adding two polynomials together will always give you another polynomial. Is there some example that seems to be contradictory? Let me know, and I can try and clear it up!
-QuantumSense
@@quantumsensechannel Thank you for your response. My confusion arose while I was digging into the properties of vector spaces; I found those claims and decided to ask ChatGPT. It was first affirmative about polynomials being a vector space, but then it said the opposite. I am sorry, is there a way I could send you the screenshots of what it said? The main concern is that adding polynomials of the same degree could give you another polynomial of a different degree, and I'm not sure if that violates the closure axiom.
Hello,
Ah, I see. AI may be quite intelligent, but we should not rely on it as the sole source of mathematical understanding! The set we are considering is the set of polynomials with real number coefficients; we make no mention of the degree. So adding two polynomials with real coefficients will always give you another polynomial with real coefficients, hence closure under addition.
ChatGPT may be considering the set of all polynomials of degree n. This, indeed, would NOT be a vector space, since adding two polynomials of degree n could give you a polynomial of different degree (e.g. (1+x^5) + (1-x^5) = 2). So ChatGPT seems to be considering a different set than the one I mention in the video.
Let me know if this doesn’t clear it up!
-QuantumSense
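The distinction in the reply above can be sketched in code (a toy illustration of my own, with polynomials as coefficient lists; none of these names are from the video): the set of all polynomials is closed under addition, but the sum (1+x^5) + (1-x^5) = 2 shows that the degree can drop, so "polynomials of degree exactly n" is not closed.

```python
def poly_add(p, q):
    # coefficient lists: index i holds the coefficient of x^i
    n = max(len(p), len(q))
    out = [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
           for i in range(n)]
    while len(out) > 1 and out[-1] == 0:  # drop trailing zero coefficients
        out.pop()
    return out

p = [1, 0, 0, 0, 0, 1]    # 1 + x^5
q = [1, 0, 0, 0, 0, -1]   # 1 - x^5

# The sum is still a polynomial (closure holds for "all polynomials"),
# but its degree drops from 5 to 0.
print(poly_add(p, q))     # [2]
```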
@@quantumsensechannel I am really grateful for your response, it cleared all my confusion. Your videos really helped me understand these concepts further, and I honestly hope you succeed. I actually started studying QM out of curiosity from a young age; I had a solid grasp of the abstract concepts, but I needed to construct mathematical intuition, and your videos are perfect. Thank you so much 😊
Excellent! Now I get it!
Hey, please explain the terms "separable" and "complete" for a Hilbert space in terms of Cauchy sequences, as usual in many books. This bit about Cauchy completeness is really amazing; great video and great explanation 👏🏻
So elegant. An amazing piece of work 👍👍
Excellently taught!
Loving the math of QM. tysm
A vector space is defined by some "rules", the first of them, as illustrated, being that the sum of vectors is a vector. If I understood properly, the space of polynomials is one where the (infinite) sum of its elements is not defined within the space itself: the infinite sum of polynomials is not itself a polynomial, whereas a Hilbert space does contain such limits. Is that so?
a really good intuition, thanks
So, to stay inside Hilbert space, you can't have large-scale physical things that go to infinity, like the universe that we live in?
What is the name of the audio in the first 20 seconds? I love your work; the illustrations are very beautiful.
Hello, thank you for the kind words!
Any music I use will always be listed in the description along with a link to the artist’s page.
-QuantumSense
As e^x can be written as an infinite sum of polynomial terms, why can't we consider it inside the vector space of polynomials? What actually goes wrong?
Polynomials are defined to be finite. Infinite sums are not really sums, but limits of sums, so you need a mathematical structure that allows you to construct limits in order to define them. In the vector space of polynomials we usually don't have that defined in the context of linear algebra, precisely because you can have series of polynomials that result in things that are not polynomials, such as sine, cosine and the exponential.
But how can infinitely many items be bounded by a space, and we call it Hilbert space? Infinite means no bounds, does it not?
In the context of Hilbert spaces, “bounded” refers to the norm (magnitude) of elements within the space rather than limits on the number of elements.
No, infinite does not mean no bound. Consider the sequence 1, 1/2, 1/4, 1/8, …
Clearly has infinitely many terms all bounded below by 0.
If we start with the assumption that the vector space of polynomials is closed, and then find that an infinitely long polynomial comes to e^x, you say that we should conclude that we made a mistake in assuming an infinitely long polynomial was allowed, that something "went wrong" when we did it. But couldn't we just as easily conclude that e^x is part of the vector space of polynomials?
e^x isn’t a polynomial, so if we mean “the vector space consisting exactly of all the polynomials”, e^x isn’t in it.
But we could perhaps take some completion of the space which would include it?
@@drdca8263 What I'm asking is why can't we conclude that e^x IS a polynomial?
@@badlydrawnturtle8484 Do you mean by redefining “polynomial”, or just concluding that it is one in the existing definition of polynomial?
We can’t conclude that because it doesn’t follow.
And, it can’t follow, because the conclusion is false:
Any non-constant polynomial in one variable has at least one complex root. The e^x function has no roots, and is non-constant. Therefore it is not a polynomial.
@@badlydrawnturtle8484 Do you know the definition of a polynomial? If you know the definition, then you know why we cannot conclude exp(x) is a polynomial. exp(x) is an example of a power series, but not of a polynomial.
Thank you!
thank you for these videos !
Is it converging at 0:48? Hmm
What a video!
Wonderful explanation. Thank you.
Instant sub!!! Great video
Great video :)
Thank you. Well illustrated. Even a non physicist can follow....
I love your videos, I've learnt a lot!
In the definition of a vector space we defined the coefficients to be scalars. Shouldn’t the definition of the Hilbert space also include the fact that the coefficients are extended to be complex numbers?
Hello! Thank you for watching.
And no, not necessarily. A Hilbert space does not necessarily need complex number coefficients. As an example, R^2 is a Hilbert space, yet the underlying scalar field is real. So being a Hilbert space says nothing about whether or not your vector space has complex coefficients, although we will see that we need complex numbers in quantum mechanics when we derive the Schrödinger equation.
-QuantumSense
@@quantumsensechannel THANK YOU VERY MUCH! Your channel is awesome. My regret is that YouTube allows me to give you only one thumbs up. 👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍👍
So does this mean Hilbert Space is a subspace of a Vector Space? It contains only those linear combinations which converge?
No. What it means is that only some vector spaces are Hilbert spaces. A vector space is not required to contain its infinite linear combinations; a Hilbert space is.
@@angelmendez-rivera351 Thanks for the reply
Zeno's paradox stuff: It's the same as 0.99999 repeating being less that one, or not, and deciding if and when it matters.
Mathematician stands at the edge of the 0.9999/1.000 cliff: Which leg is at the bottom of the cliff, and does her head hurt?
What was mainly conveyed in my course was that it's all about lettuces.
Great video
i have a quantum mechanics exam in a week if you could release more videos asap that would mean the world
this is great !
Great! 😊
@4:10 What? Nothing "went wrong". All numbers are concepts, not just "infinity". In QM you are not interested in staying within the space of finite polynomials. You want the transcendentals, but you want them to have _physical_ boundary conditions, that is, empirically correct asymptotics. That's why an exponential never arises: because (ignoring gravity singularities) physical wave functions never blow up at spatial infinity.
In fact, check out Carl Bender's lectures. Sometimes you even _want_ expressions to blow up to infinity, as series approximations, because then they sum faster (computationally) to physical finite values using asymptotic methods for the analytic continuations on the complex plane. The whole obsession by a few freaks with "finitism" and "discrete" physics is nuts. I'd say "completely nuts" but that'd be wrong, because non-trivial spacetime topology can give us discrete physics --- via the homotopy structure, not the holonomic structure --- even while spacetime is (or can be) still an ideal continuum.
What is the integral of 1/syrup? Natural Log Syrup.
thank you
Awesome video.
Does that mean the H space is countable yet if we didn’t include that last rule it would be uncountable?
I think calculus on surreal numbers will soon be mature enough to eliminate the problems with infinity in reals and complex numbers defined in the classical way.
Just because e^x isn't a polynomial doesn't mean it's outside of the vector space. e^x is a linear combination of powers of x and so in the x^n basis e^x is a vector in that space.
In that case, the vector space is the space of polynomials. e^x is not a polynomial, therefore, it is outside that vector space.
The problem is that linear combinations are, in essence, finite. That is so because they are defined with respect to addition, which is a binary operation (that is, it takes 2 vectors to 1 vector). Applying addition repeatedly allows us to make a linear combination with however many terms we want in it, but it does not allow us to make an infinite linear combination.
To define infinite linear combinations, you need to mix Linear Algebra with Calculus, that means you have to be able to define and take limits of sums in vector spaces.
This doesn't mean, however, that vector spaces (which are closed under addition) will be closed with respect to these limits that we are taking when summing up to infinity.
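A minimal sketch of that limit process (function names are my own, for illustration): each partial sum of the Taylor series is an honest finite linear combination of powers of x, i.e. a polynomial, while the limit of the sequence is e^x, which is not.

```python
import math

def taylor_partial(x, n):
    # first n terms of the Taylor series of e^x: a genuine polynomial in x
    return sum(x ** k / math.factorial(k) for k in range(n))

# every entry is the value of some polynomial at x = 1; the sequence of
# values converges to e, yet no single polynomial equals e^x everywhere
approx = [taylor_partial(1.0, n) for n in (1, 3, 6, 20)]
print(approx)
```

Each element of the sequence lives in the polynomial vector space; it is only the limit that escapes it, which is exactly the completeness issue the video raises.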
@@cauebonassi9225 Thank you for the reply! There's something I'm not understanding at a fundamental level. How is taking an infinitely long linear combination of powers of x fundamentally different from a finite-length sum of powers of x? If every power of x represents a basis vector, then e^x would require an infinite-dimensional space, but don't the basic rules of linear algebra still hold (dot product, norms, etc.)?
Can you share video code??