Many problems in physics can be studied with linear algebra. From Newtonian mechanics to the monsters called quantum field theory and relativity (special and general), linear algebra has proved a powerful tool for making predictions about nature, because of its great unifying power. You can study coupled linear oscillatory systems and find their symmetries under linear transformations; in the end, this implies conserved quantities, in accordance with Noether's theorem.
I feel like the most intuitive way to think about linear algebra is LTI systems, i.e., amplifiers. Let's say you want to put a sound through an amplifier and mix it with another one. Then it doesn't matter if you mix the two sounds before or after you put them through the amplifier. That's all linearity means. Now let's say you're Dr. Dre and want to pump the bass. What is an arbitrary sound going to look like after being put through the amplifier? No idea. But each individual frequency is just going to be multiplied or phase-shifted by some number. Therefore the frequencies are eigenvectors of the amplifier. That's all "eigenvector" means. The eigenvalues are just the multiplications and phase shifts. So you can simplify the calculation of what's going to happen to a sound by transforming to a frequency basis and doing your calculations there. That's all matrix diagonalisation is.
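A minimal numpy/scipy sketch of that claim: a circular-convolution (LTI) operator written out as a matrix has the pure frequencies as eigenvectors, with the frequency response as the eigenvalues (the impulse response here is made up):

```python
import numpy as np
from scipy.linalg import circulant

h = np.array([0.5, 0.3, 0.1, 0.0, 0.1])    # made-up impulse response of an LTI system
A = circulant(h)                            # circular convolution with h, as a matrix
lam = np.fft.fft(h)                         # frequency response of the system
N = len(h)
n = np.arange(N)
for k in range(N):
    v = np.exp(2j * np.pi * k * n / N)      # the k-th pure frequency
    assert np.allclose(A @ v, lam[k] * v)   # frequencies are eigenvectors of A
```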
BTW: It's interesting to calculate the eigenvalues and eigenvectors of this matrix... For example, the 5th eigenvalue is λ_5 = 0 with eigenvector v_5 = (0, -1, 1, 0, 1). On the other hand, the first eigenvalue is the biggest one, λ_1 = (1+sqrt(13))/2, with eigenvector v_1 = (λ_1, 2, 1, λ_1, 1). The other eigenvalues are λ_2 = (1+sqrt(5))/2, λ_3 = (1-sqrt(13))/2, and λ_4 = (1-sqrt(5))/2. But in some cases it may be better to ignore the reflections; then λ_1 = -sqrt(3), λ_2 = sqrt(3), λ_3 = -1, λ_4 = 1, and λ_5 = 0.
Thanks for this lecture. The calculus application was something I never learned about before. A real eye-opener, that. In the first example, how do we know that the 4 trig-based functions actually span a 4D space? What about linear independence? In the graph theory application, it seems to me that vertex 1 is also connected to vertex 2, which you did not include in the matrix.
I think you need to prove that the spanning set is also a basis; that means you need to prove that those 4 functions are linearly independent. They are, so they form a basis.
I don’t think these examples show us “unreasonable effectiveness”; their effectiveness is very reasonable. Spaces of smooth functions naturally have the structure of vector spaces, and linear differential equations by definition arise from linear operators on these spaces. Same story with groups. On the one hand, they have strong connections with rings (via the group ring construction: a group action of G gives a ZG-module structure) and so with modules over rings (the theory of modules over rings is a generalization of linear algebra). On the other hand, vector spaces have a natural action of the automorphism group (also known as GL, the general linear group), and for every group G we can find a big enough space V and build a faithful representation G -> GL(V) (a “vectorification” of Cayley's theorem for groups). That's why the connections with group theory and the theory of differential equations are not surprising.
What is REALLY surprising is that linear algebra helps us solve a lot of problems from discrete maths. For example, the weak Berge conjecture (a graph is perfect iff its complement is perfect) has a linear-algebraic proof. Also, spectral graph theory studies the spectra of graph matrices (purely combinatorial constructions in which it's hard to see algebraic meaning) and gives us results about (for example) the inner structure of regular graphs. This is what we really can call the “unreasonable effectiveness of linear algebra”. Sorry for any mistakes; English is not my native language.
"Unbelievable" would make better sense here than "unreasonable". Linear algebra is highly effective, so effective that at times it may be difficult to believe exactly how effective it is.
I remember taking linear algebra in college and it opening up amazing possibilities in computer graphics but this is pretty dense. I'm saying this 4 minutes in and will continue watching to see if I get what he's saying before applying for a job with the sponsor 😅😂😂🤣
My Differential Equations lecturer said that humans are so stupid that we are only capable of computing linear things. Out of all the possibilities, our brain only works for A(x+y) = A(x) + A(y), so we have developed our mathematics along the lines of the "developable", which means relating everything to linear algebra. To answer "why is linear algebra so useful?", try to imagine anything not using linearity: no progress would be achieved, so nobody would study it, nobody would care about it, and we would forget about it. So the usefulness of linear algebra is a kind of survival bias.
📝 Summary of Key Points:
📌 Linear algebra provides powerful tools for analyzing mathematical structures and gaining a deeper understanding of them. It allows for the translation of mathematical concepts into the language of linear algebra, enabling their study using linear algebra techniques.
🧐 In the first example, a four-dimensional real vector space spanned by four functions is examined. By representing this structure as a matrix, the derivative of the functions can be analyzed, and the matrix can be used to find the anti-derivative of a function.
🚀 The second example explores how a group can be represented in linear algebra using matrices. The group ZN is represented as 2x2 matrices with real entries, and addition in ZN is represented as matrix multiplication. This representation allows for the study of groups using linear algebra techniques.
📊 Linear algebra has applications in data science and machine learning. Data can be encoded into matrices, and matrix factorization techniques can be used to analyze the data. Examples include encoding images and representing networks or graphs using adjacency matrices.
💡 Additional Insights and Observations:
💬 "Linear algebra provides a powerful framework for studying mathematical structures and solving problems in various fields."
🌐 The video references the use of linear algebra in data science and machine learning, highlighting its practical applications in these areas.
📣 Concluding Remarks:
Linear algebra is a versatile and effective tool for studying mathematical structures and solving problems. By translating mathematical concepts into the language of linear algebra, we can gain a deeper understanding and apply powerful techniques. From analyzing derivatives and integrals to representing groups and encoding data, linear algebra plays a crucial role in various fields of study.
Generated using Talkbud (Browser Extension)
I had been recommended to study LA *before* Calculus 3, so that I could understand the chain rule as a simplified form of the Jacobian.
If you ever browse over the "attention" paper on the transformers architecture in LLMs, the sentence about positional encoding that goes "... for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}" has some relation to the first application in this video.
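A hedged numpy sketch of that sentence, grouping the sines and cosines into blocks rather than interleaving them as the paper does (the function names here are made up): shifting the position by k is a fixed block-rotation matrix.

```python
import numpy as np

def pe(pos, dim=8, base=10000.0):
    """Sinusoidal positional encoding (sines first, then cosines)."""
    w = base ** (-2 * np.arange(dim // 2) / dim)   # per-pair frequencies
    return np.concatenate([np.sin(pos * w), np.cos(pos * w)])

def shift(k, dim=8, base=10000.0):
    """Matrix M with M @ pe(pos) == pe(pos + k) for every pos."""
    w = base ** (-2 * np.arange(dim // 2) / dim)
    c, s = np.cos(k * w), np.sin(k * w)
    # angle-addition formulas: one 2x2 rotation per frequency
    return np.block([[np.diag(c), np.diag(s)],
                     [np.diag(-s), np.diag(c)]])

assert np.allclose(shift(3.0) @ pe(5.0), pe(8.0))
```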
Could you theoretically use any periodic function instead of sin and cos in the modulo sum example? My thinking is you need pure sine and cosine to get a steady tick around the circle, but with other periodic functions you could have some sort of interesting "weighting function" on the inputs. With sine and cosine it's pure and steady, but with something like a triangle wave or a compound sine wave, you could induce some very strange behavior.
Finding the integral by finding the inverse of the derivative matrix was kind of mind-blowing. I know your example was chosen to make it simple, but can this be generally applied as a way to compute integrals?
I remember being mind-blown just like you and wondering the same thing when I first saw this. I went down the rabbit hole of functional analysis and differential equations and I'm still digging. A neat way of thinking about this problem is to view it as solving the differential equation Df=g for the antiderivative f of the function g, and D is the derivative operator. Since differentiation is linear, Df=g now looks like a linear algebra problem where D is a linear operator, and f and g are vectors. In fact, in the context of functional analysis, functions are vectors belonging to vector spaces of functions appropriately called function spaces. If we can find a finite basis for the function space of f and g, then we can represent D as a matrix, g as a coordinate vector, and solve for f by matrix multiplying D inverse with g just like in the video. In general, these function spaces could be infinite dimensional and there's not always a useful basis to represent them, but the field of functional analysis has classified many kinds of function spaces with a variety of useful bases for solving differential equations.
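To make that concrete, here is a minimal numpy sketch on the span of sin x, cos x, x·sin x, x·cos x, which the derivative maps to itself; the constant function is not in this span, which is why the matrix is invertible:

```python
import numpy as np

# basis B = (sin x, cos x, x·sin x, x·cos x); column j of D holds the
# coordinates of the derivative of the j-th basis function
D = np.array([[0, -1, 1,  0],
              [1,  0, 0,  1],
              [0,  0, 0, -1],
              [0,  0, 1,  0]])

g = np.array([0, 0, 0, 1])    # g(x) = x·cos x
f = np.linalg.solve(D, g)     # solve D f = g, i.e. invert differentiation
print(f)                      # ≈ [0, 1, 1, 0]: an antiderivative is cos x + x·sin x
```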
Hi, how would you encode the +C? You can point me to a reference or calculate directly here; I will comprehend using notation. Also, I never saw the integral vector-matrix notation before, but I follow you very easily. MS CS with BS CS (ML), BA Math (Stats). Wanted to take more real/complex/advanced matrix theory but ran out of time and had to get my career started before I hit 35. Love staying sharp with this content early mornings. TYVM❤
In Soviet Russia, an economic advisor whose name I forget used linear algebra to balance production across cities to maximize profits and minimize waste. Also, I think about linear algebra all the time while I sit at strings of red lights in traffic every day going to class 🤬😡
Isn't there a mistake in the matrix at 14:20 and following? It shows no connection between nodes 1 and 2, but a connection between node 2 and itself.
To apply for an open position with MatX, visit www.matx.com/jobs.
Isn't this website truly for math...?
And is there any age restriction on this website for applying...?
Truly a video a representation theorist would've made
Thank you for this post. I didn't even know Linear Algebra could do so much. I knew it was a great math but I was just ignorant as to how great it is.
Wow. Thanks! Just the kind of work I'm looking for.
The explanations could have been clearer.
For people who want to know more, what Michael Penn is hinting at is called Representation Theory. One very popular line of attack to classify mathematical structures is to represent them as compositions of linear transformations in vector spaces. In many cases of interest, you can prove that if you cannot find a representation with certain properties, then it means that the thing you are trying to study does not have an important property. And since studying representations is much easier than studying the abstract structure, it simplifies things a lot.
That's how Fermat's Last Theorem was ultimately conquered. They reduced the problem to the nonexistence of a given structure, and through some long arguments could reduce it to properties of the representations, which could be brute forced to prove no solution would exist.
"In many cases of interest, you can prove that if you cannot find a representation with certain properties, then it means that the thing you are trying to study does not have an important property"
Is it some kind of theorem? Could you throw out some keywords so I could learn more?
@@Wielorybkek Look at the Cartan subalgebra of a finite dimensional Lie algebra, in particular it's defined in terms of a property of a representation of a Lie algebra.
It's the properties of the Cartan subalgebra that allow us to classify finite dimensional Lie algebras. This is widely considered one of the most powerful results in Lie theory.
Representation theory will be my thesis. Writing about rep theory of special linear groups.
that was a nice comment! gonna look into it! thanks!! =D
I'm reminded of linear filtering done in frequency-like domains instead of convolution in the original domain. Transform your filter kernel, transform your signal, pointwise multiply, inverse transform. Where "transform" is something like the FFT. You can do this operation in blocks and then there are various methods with various tradeoffs used to "stitch" together the inverse transform blocks.
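A minimal numpy sketch of one such stitching method, overlap-add (the block size and names are arbitrary):

```python
import numpy as np

def fft_filter(x, h, block=256):
    """Filter x with kernel h by blockwise FFT convolution (overlap-add)."""
    n = block + len(h) - 1                 # linear-convolution length per block
    nfft = 1 << (n - 1).bit_length()       # round up to a power of two
    H = np.fft.rfft(h, nfft)               # transform the kernel once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        yseg = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        yseg = yseg[:len(seg) + len(h) - 1]
        y[start:start + len(yseg)] += yseg  # overlapping tails add up ("stitch")
    return y

x, h = np.random.randn(1000), np.random.randn(31)
assert np.allclose(fft_filter(x, h), np.convolve(x, h))
```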
"If you can reduce a mathematical problem to a problem in linear algebra, you can most likely solve it, provided you know enough linear algebra". This was a quote in the preface of Linear Algebra and its Applications by the great mathematician Peter D. Lax. It was my first book on the subject and that sentence stuck with me ever since
But for real-life modelling, if you can reduce a problem to a problem in linear algebra, it very likely means that your understanding of the problem is wrong or you have made a huge mistake in your modelling.
@@trevoro.9731 bro, people have sent rockets to other planets, built trillion-dollar search engines, and built the compression algorithms you're using to watch this YouTube video right now with linear algebra. Huh?
@@arthurswanson3285 First of all, you are talking about data abstraction, not the modelling of real processes. I'm not sure about rockets, as that involves modelling non-linear processes. As do electronics and other things.
🤓 My first linear algebra book was the very easy-to-read Grossman 🤗🤗🤓🤓🤓
Wonderful sentence. I think of a lot of math as spatial analogies.
My quantum mechanics professor once mentioned that "we are very lucky that the fundamental laws of nature are expressed using the language of linear algebra". This video really changed my perspective on this matter...
An old school perspective on this is that classical mechanics rapidly gives rise to nonlinear differential equations, like the pendulum theta'' = -sin(theta), but the dynamics of quantum systems are always linear equations (time derivative of a state is equal to a matrix applied to the state). The traditional explanation is that the dimension of the matrix grows exponentially with the number of particles, and a compact nonlinear equation is in some sense better to work with than an exponentially large linear set of equations. But yes it could be that quantum mechanics is linear because that's the only part of it we can access (like a tangent approximation to a full theory, Hilbert spaces as approximations to Kahler manifolds).
I hate that way of thinking when people really mean it. It's like how the question "is light a wave or a particle?" doesn't make sense as a question. People take these systematized conceptions that are useful in describing reality, which is immanent, and run with them, getting further and further away from the point of science. It's like how, since science has become high status, a bunch of people rush in and refer to the science, or the form of science (sometimes not even that), without the essence.
Algebra is a really important subject of math, and almost everything in algebra can be understood with linear algebra via representation theory. This makes linear algebra a really powerful tool!
If anyone here also likes Number Theory, look up the concept of Arithmetic Space which I invented in my book, Foundations of Formal Arithmetic.
@@raphaelreichmannrolim25 Is there an English version of your Masters Thesis? I am afraid the only copy I can find is in Brazilian Portuguese.
@@thirdeyeblind6369 Sadly, there isn't yet. I had the objective of translating it myself, but haven't yet. The most fundamental concept exposed there is the nilpotent arithmetic space of order N. Its algebra behaves as a finite-dimensional projection of the infinite algebra of arithmetic operations. When I was researching this, I used this finiteness while applying abstract harmonic analysis and Gelfand theory to obtain trigonometric representations of functionals defined on the group of invertible arithmetic operations of these algebras. In particular, we can use this to represent any function obtained by additive and multiplicative convolutions, such as the Mertens function. However, despite the simplicity of the method, I didn't see how this could help us bound the Mertens function. I stopped working on the subject a few months ago.
I think when you learn linear algebra as a student you don't really get this impression, but it's an important thing to realise when you study mathematics: linear algebra is conceptually extremely easy and is basically "solved" as a subject.
When I was studying it, it felt like we were just doing different versions of the same matrix multiplication/addition/subtraction at the beginning of the course, but obscured through vocabulary and proofs.
So true, in my path to a physics degree, we really didn't spend lots of time on linear algebra. I was always fascinated by it and appreciate these videos. Once I retire it will be fun to study this again. Thanks for this video!
I passed linear algebra and always wondered what real mathematicians do with representation theory; I had no idea I was so close!
Not true. Particularly Applied Linear Algebra. “Applied” here still means pure mathematics: e.g., solving a linear system more efficiently than standard techniques under certain conditions. An extremely vibrant field. Actually used in AI, for example. (Which is a true “application” of pure mathematics).
This is so wildly untrue that I can't tell if you're trolling or just never actually studied linear algebra beyond introductory material and have mistakenly concluded that the introduction is all there is.
Either way I encourage you (and anyone else interested in math) to study linear algebra. It's an exceptionally fun research area where you're never lacking in support due to its utility both in and outside of mathematics.
I've done a lot of numerical analysis in a long career. I've long claimed that 90% of the job is finding the integral transform that maps your impossible problem into linear algebra, and then letting a computer do the linear algebra. If asked what piece of the subroutine libraries that I would re-implement first if I didn't have them, I'd have to say that it's the singular value decomposition. It's the Swiss Army Knife of numerical linear algebra.
I have a question: why is the SVD important outside of general lossy compression or least-norm problems, and how is it used in those cases? For example, you mentioned calculus; any keywords for those methods?
Would you be willing to recommend books on this?
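Apropos of the SVD comment above, a minimal numpy sketch of one Swiss-Army use: minimum-norm least squares for a rank-deficient matrix via the SVD pseudoinverse (the data is random; the rank cutoff follows the usual numerical convention):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 8))  # rank <= 5
b = rng.standard_normal(100)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = s.max() * max(A.shape) * np.finfo(float).eps   # numerical-rank cutoff
r = int((s > tol).sum())
x = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])            # minimum-norm least squares
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```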
The best application of linear algebra is certainly functional analysis. It transforms the mess of differential/integral equations into something really elegant and easy to use.
If you are also interested in Number Theory, look up the concept of Arithmetic Space which I invented in my book, Foundations of Formal Arithmetic.
At 16:50, should the adjacency matrix be adjusted a bit? It seems to suggest that node 1 is only connected to itself and node 5, missing the connection to node 2, and that node 2 is connected to itself…
Yep I think so too
Thanks for your post - at first viewing I had involuntarily stopped listening, as Row_n = Column_n for the obvious 1
And the second column should be 1 0 0 1 0, because node 2 is connected to 1 and 4.
Many thanks for this video. Short remark: the second column should be 1 0 0 1 0, because node 2 is connected to 1 and 4.
Yes. Node 2 is connected to 1 and 4, not itself and 4.
That first example is so refreshing!
Wow! Great overview! My favorite applications of linear algebra: spherical geometry (makes the equations intuitive), Fourier analysis, multivariate Gaussian distributions, affine transformations of random variables, linear regression, and engineering problems combining some of the above, especially when the matrices can be manipulated to make the solution methods (almost) unreasonably elegant! 🙂
I’m curious to learn more about spherical geometry as linear algebra. Do you have any pointers? Searches like “linear algebra of spherical geometry” hit almost everything related to LA *or* SG but I haven’t found anything related to LA *and* SG.
@@AdrianBoyko I would start with a search for "3D rotation matrices" or just "rotation matrices", and explore from there. That's really what I meant when I said spherical geometry. I rotate to a coordinate system where points of interest are in a plane and then measure distance, etc. in that plane. You might find references to chapters in astronomical calculations text books that deal with the subject. (I'm away from my library for an extended period, or I'd offer a suggestion.) Hope this helps.
@@TomFarrell-p9z Thanks for responding but the spherical geometry I’m interested in is on the 2D surface of a sphere. I’m aware of the applications in astronomy and navigation so I’ve seen all that (:
I found the idea of using Cayley-Hamilton to find the four square roots of a 2 by 2 matrix stunning, because the algebra behind it is ultimately so easy... Linear algebra is a must for every math-inclined person.
My favorite application of linear algebra is quantum mechanics. Quantum chemistry basically is a huge eigenvalue problem. If you use a plane wave basis with periodic boundary conditions, you can do some of the calculations much more efficiently in momentum space using a fast Fourier transform.
Besides the adjacency matrix, a graph can be represented using the closely related Laplacian matrix. This has some mind-blowing applications. Like if you take the two eigenvectors corresponding to the two largest eigenvalues (by absolute value) of the Laplacian and use them as arrays of X and Y coordinates for the nodes, you get a really nice 2D representation of the graph that happens to be the solution to a particular optimization problem where the edges are springs.
Thanks, your answer led me to reading a few articles on graph spectral theory, a subject I am more or less innocent about. It looks powerful, I'm gonna have a good time geeking this up!
What is the optimization problem called?
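A sketch of that spectral layout, assuming networkx and matplotlib are available. Note that the spring-energy-minimizing coordinates come from the eigenvectors of the *smallest* nonzero Laplacian eigenvalues (the "largest" convention applies when working with the adjacency matrix instead); the quadratic-placement formulation is often attributed to Hall.

```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

G = nx.grid_2d_graph(6, 6)                         # any connected graph works
L = nx.laplacian_matrix(G).toarray().astype(float)
vals, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
xy = vecs[:, 1:3]                                  # skip the constant 0-eigenvector
nx.draw(G, pos={v: xy[i] for i, v in enumerate(G)}, node_size=30)
plt.show()
```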
I remember the first time (Grade 13 high school) I saw the complex numbers, a+bi, represented by a 2x2 real matrix:
a -b
b a
All of the operations on the complex numbers match exactly with matrix operations. So simple and obvious once you see it, but that's what makes it so amazing. Modulus/determinant, De Moivre's theorem, the rotation matrix. The list goes on. Even
0 -1
1 0 multiplied by itself gives
-1 0
0 -1
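A quick numpy check of the correspondence:

```python
import numpy as np

def as_matrix(z):
    """a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 2 + 3j, -1 + 0.5j
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))   # multiplication
assert np.isclose(np.linalg.det(as_matrix(z)), abs(z) ** 2)         # modulus <-> det
```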
Not a mathematician, I just like to mess with math sometimes. I was playing with some ideas for fun and stumbled upon modular addition and subtraction looking like rotation from a different lens (it was an algebraic approach, but no matrices). I was just interested in sets that generate their elements cyclically over a given operation (that might also have an inverse). I was playing with the idea that for some functions repeated application of the derivative will yield the integral, and just started trying to generalize and maybe extend the idea (I figured if e^x, e^ix, sin(x), and cos(x) are examples, I could maybe do interesting stuff if I could get a bit more general). After I had a tentative list of axioms (for the behavior I was interested in), modular addition/subtraction and rotations (clockwise and counterclockwise) ended up among the examples I could come up with satisfying them. Maybe one day I'll get to finish playing around with the ideas, and probably discover I'm going nowhere new and stumble upon something easily googlable, most likely.
Days ago, I was thinking of the volume of a slanted/oblique cone. If it were a cylinder, its volume would obviously be the same as a regular cylinder's, by thinking of a stack of disks pushed sideways.
After a little digging, I found this is called Cavalieri's principle, and it should work for cones as well. From another perspective, I tried to write this pushing of a stack of disks as a matrix, which formed something called a "shear matrix". Amazingly, the determinant is 1, meaning there is no impact on the volume!
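A tiny numpy check of that observation:

```python
import numpy as np

s = 0.7                         # arbitrary slant
S = np.array([[1, 0, s],        # push each horizontal slice sideways
              [0, 1, 0],        # in proportion to its height z
              [0, 0, 1]])
print(np.linalg.det(S))         # 1.0: the slanted cone keeps its volume
```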
This is why triangles are 1/2 *b*h regardless of how 'slanted' they are.
The basic thing you've (partially) discovered is a shape constant. The shape constant for any rectangular n-dimensional figure is 1, but if you remove material from it to get other shapes, then a shape constant comes into play. For ellipses it's pi/4, which relates both the area of the ellipse to the circumscribing rectangle and the circumference of the ellipse to the perimeter of the circumscribing rectangle, though if the ellipse has an eccentricity greater than 0 there's another shape coefficient at play as well. That one is pretty well approximated by pi/4 *((1+b/a)^2 +4), or pi/4 *((b/a)^2 +2b/a +5), where a is the semi-major axis and b is the semi-minor axis of the ellipse. For triangles the shape constant is 1/2, and it only relates to area. For pyramids the shape constant is 1/3, and it only relates to volume. The reason it's 1/2 and 1/3 is that it's the inverse of the number of dimensions the shape lives in, and it's no coincidence that these are identical to the coefficients that appear in the power rule when taking integrals or derivatives of polynomials.
Since a cone is just a pyramid with a circular base, its volume will always be pi/4 *1/3 of the rectangular prism made of its height and the rectangle circumscribing its base.
pi/4 is in fact so much more fundamental than pi that most slide rules that mark something related to pi mark either pi/4 or 4/pi, because it allows finding the areas or volumes of shapes much more easily than trying to use pi usually does.
17:57 Actually, that would be graph theory. But I also like to show the Fibonacci formulas with matrices!
18:01 Good Place To Stop
How long have you been doing this by now actually?
@@einbatixx4874 I thought it was around 2 years, but actually it’s 3 and a half years 🤯
Dude I got busy with college and stuff so couldn't really visit this channel much, now after two years you're still doing this. Respect the dedication!
@@einbatixx4874 "doing this" doing what?
Finally a video I could keep up with! There's a small error in the adjacency matrix in column 2, but this was a great video. I recently used linear algebra to least-squares fit a function on a non-Euclidean manifold. Linear algebra is unreasonably effective even in intrinsically nonlinear spaces lol
The concepts and symbology of linear algebra were not taught to me at school, so at uni the jump into what the classes were doing hit me like a train and pushed me out of the course. 10 years later, I took a subject-related course (computing) - and it happened again, same problems; I would never pass that module, so I dropped out.
This happened to a relative in 2023 - he had to quit after the 1st semester as he was utterly lost: he had never used LA, and the class was already past what he knew and pulling away rapidly. His subject - finance.
I went on to complete a PhD in a related topic - but the thing is: why is LA missing from some schools, yet assumed at uni??
Fabulous!!!! I am so excited to explore all of this stuff more deeply!
In a bunch of classes we would reduce parts of the problems to Linear Algebra and the proof would then be written as "proof via LinAlg"
When I started learning determinants, I began wanting to revert determinants back into matrices. So I studied them a lot, and I have a prototype for going from matrix equalities to equation equalities in reverse order - basically finding all the solutions of a 3-variable equation.
I remember using it for mono-directional paths, with the trace of A^n being the number of ways to complete a cycle after n steps. Useful in one of my classes back in 2013.
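A quick numpy illustration of that trace identity, on a tiny directed cycle made up for the demo:

```python
import numpy as np

# directed 3-cycle: 0 -> 1 -> 2 -> 0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
for n in range(1, 7):
    closed = np.trace(np.linalg.matrix_power(A, n))
    print(n, closed)   # nonzero (namely 3) exactly when n is a multiple of 3
```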
So hard to choose one application as my favorite, but I think Markov chains is definitely high up on my list.
I like the use of linear algebra to find a closed form expression for the nth Fibonacci number. Solving linear recurrences by turning them to a matrix, diagonalizing them, and computing their powers to give entries of the sequence.
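A minimal numpy version of that trick (floating point, hence the rounding at the end):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])     # [F(n+1), F(n)] = A @ [F(n), F(n-1)]
vals, vecs = np.linalg.eig(A)  # eigenvalues: the golden ratio phi and 1 - phi

def fib(n):
    D = np.diag(vals ** n)     # A^n = V D V^(-1) after diagonalizing
    return (vecs @ D @ np.linalg.inv(vecs))[0, 1].real  # (0,1) entry of A^n is F(n)

print([round(fib(n)) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```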
I regret not taking linear algebra in college. I was a biology major but I love math and physics. I should've taken them as electives.
Learn it now
Check out Linear Algebra done right videos or Gilbert Strang lectures... not too late (just gotta devote Saturday mornings)
My favorite application of linear algebra is in the Foundations of Mathematics and Number Theory, through the concept of an Arithmetic Space, which I developed, showing a way to study the Peano axioms using linear algebra. Didn't solve the hardest problems yet, though! You can find it in the book Foundations of Formal Arithmetic
is that your book?
@@vyrsh0 It is! Not the most polished book, but in it I introduced the notion of the Algebra of Arithmetic Transformations, which gave a universal foundation for the theory of generating functions directly from Peano Axioms and linear algebra. It's very neat! The most interesting object is the group of invertible Arithmetic Transformations.
The modulo sum to matrix multiplication blew my mind. I wish I'd known that years ago.
My favorite application of linear algebra is intersecting conics in the 2D Euclidean plane. I do Euclidean geometry (with Python code, SageMath and so on), and because I never found a better way, I use one simple piece of code, which I never tried to completely understand, for getting the intersection points. It is based on the 3x3 matrix representation of conics. My second application is the Cayley conditions for Poncelet configurations. My old third application would be quaternions represented as 3x3 matrices... more number theory, or a sequel reviewing Hamilton's work. My fourth application would be in computational geometry and graph theory, like the last part of your video, about paths in a graph (if M is the adjacency matrix with binary elements, then M^n gives paths of length n, and M + M^2 + ... is a matrix with binary elements deciding whether there is ANY path between two vertices).
In statistics we have eigenvalues representing variances in factor analysis, the Cholesky decomposition, Jacobians, the Hessian matrix, the magical LR hat matrix, and variance/covariance matrices.
Wow!!! I never saw this connection before. Integrating by using an inverse matrix!!! So awesome. Thank you!
I like linear algebra, it's straight to the point.
There was an exercise in a parallel computing textbook that I solved with a pretty fun application of linear algebra. It's especially nice for showing people why we care about linear independence and about working over general fields F rather than just R or C. The problem was: show that all cycles in hypercubes have an even number of edges.
I imagine there's a more straightforward way to do it. But the idea is: you make a boolean vector space. So if your hypercube is d-dimensional, take it as binary strings/tuples of size d. Then there are 2 to the d of those tuples.
As vector addition take component-wise XOR, and as scalar multiplication logical AND. I'm pretty sure this is just Z mod 2 arithmetic, but idk if there's something I'm forgetting.
Tldr: use the standard basis e_i, but with the Z mod 2 arithmetic component-wise and as your field; then bam, the hypercube is the span of your basis, and the vectors are the vertices of the graph. Then for the edges of the hypercube: take the product of the vertex set with itself to get all 2-tuples of vertices, and the edges are the subset of those for which adding the two endpoints gives a result in the basis.
Now you're off to the races. Take any cycle of length p; we have to show p is even. The thing we have to work with is that, by definition of a cycle, you end where you start. The idea: how did we define an edge? That when you add its two endpoints you get a basis vector. So you can cook up an equation: if v1 is the start, then v1 + (a sum of basis vectors, one per edge) = v1. Add v1 to both sides; vi + vi equals the zero vector in this arithmetic, since any bit XORed with itself is 0. So now the sum of a bunch of basis vectors is 0 - there may be repeats, since you can cycle around a trillion times any which way. Then do some argument to justify collecting common terms, so you'll have a1 e1 + a2 e2 + ... + ad ed = the zero vector. The only way that happens is if each a_i is 0 mod 2. So all the a_i are even, and any finite sum of even numbers is even.
Really fun for pure math folks who only think of linear algebra as a tool for analysis, and really fun for CS/EE types who just think of it as a means to crunch matrices.
Machine Learning is a powerful application of linear algebra in the IT ecosystem.
Linear algebra can also be thought of as mathematics in the small, i.e., local analysis. A large scale structure like an n-dim smooth manifold looks like a Euclidean space when you zoom in, and voila, you can apply linear algebra!
A simple but elegant application is representing f(x) = (ax + b)/(cx + d) as matrix [[a b][c d]], thus making the space of Möbius functions isomorphic to SL(2) (or to something much more exciting when working in a module over Z instead of a real/complex vector space).
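A quick numpy check that composing Möbius maps corresponds to multiplying their matrices (the coefficients are arbitrary):

```python
import numpy as np

def mobius(M, x):
    """f(x) = (ax + b) / (cx + d) for M = [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    return (a * x + b) / (c * x + d)

F = np.array([[2.0, 1.0], [1.0, 1.0]])
G = np.array([[1.0, -1.0], [3.0, 2.0]])
x = 0.7
assert np.isclose(mobius(F, mobius(G, x)), mobius(F @ G, x))  # composition = product
```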
I wonder how much of this is that our puny human brains don't do well with nonlinear concepts. Thus the math that we've gotten good at happens to be the linear stuff. You could imagine an alternate universe in which we are better at nonlinear concepts in which linear math is but a tiny subset of what we focus on.
I must admit to wondering if things really are chaotically linear - linearly chaotic? - with chaos inherent within a system merely by nesting a generator in the system.
In other words nested chaotic feedback.
Wouldn't it be nice if nature played simply with simple things to beguile us rather than create monsters just out of sight?
I did a deep dive into the Lie algebra behind quaternions, and everything made a lot of sense. It gives me hope we could one day master the nonlinear (although it's a much larger class than the linear, so maybe it's apples and oranges).
our puny human brains can't even formulate the question to the answer "42" 😅
When we define a norm on L^p space using Lebesgue integration, positive definiteness does not hold, since the integral of a nonzero function that is equal to 0 almost everywhere is 0. Fortunately, the set of all functions that are almost everywhere equal to 0 is a subspace of L^p space, so we can define a new vector space as the quotient of L^p space by that subspace. On this quotient, the L^p norm can be defined without failing positive definiteness.
Quotient space is a powerful concept.
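Spelled out, the construction above (standard measure-theory notation):

```latex
\|f\|_p = \Big(\int |f|^p \, d\mu\Big)^{1/p}, \qquad
\mathcal{N} = \{\, f : f = 0 \ \mu\text{-a.e.} \,\}, \qquad
L^p := \mathcal{L}^p / \mathcal{N}
```

On equivalence classes, ||[f]||_p = 0 iff f = 0 almost everywhere, i.e. iff [f] = [0], so positive definiteness holds on the quotient.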
That is indeed a wonderful picture that you've drawn, thanks for the video!
I've always found mathematics to be a great subject and have always respected it for both what it is, and its usefulness. However, in the past I have always had trouble understanding some of it. Here lately, I have spent much more time studying the subject, and have realized that with enough effort, time and determination, you can get there.
Here is a physical application to explore: in geometric optics, represent a light ray by the vector (nu, h), where n = refractive index of the medium, u = slope of the light ray, and h = height at which the ray meets a given surface (relative to the optical axis). An optical system can then be described by a composition of matrices (see the sketch after the list):
* [[1 P] [0 1]] for refraction, where P is the refractive power of the surface
* [[1 0] [-d/n 1]] for travelling through a medium, where d is the horizontal distance
For instance, a typical thin-lens situation is described as a product of four matrices: object distance, entering the lens, leaving the lens, image distance.
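A runnable sketch of that idea, taking the matrices above at face value (optics sign conventions vary by textbook, so treat the signs as illustrative; all numbers below are made up):

```python
import numpy as np

def refraction(P):
    """Surface of refractive power P, acting on the ray vector (n*u, h)."""
    return np.array([[1.0, P],
                     [0.0, 1.0]])

def transfer(d, n):
    """Travel a horizontal distance d through a medium of refractive index n."""
    return np.array([[1.0, 0.0],
                     [-d / n, 1.0]])

# Thin lens in air as a product of four matrices, composed right-to-left
# (the element the ray meets first sits rightmost): object distance,
# entering the lens, leaving the lens, image distance.
d_obj, d_img, P_front, P_back = 60.0, 30.0, 0.01, 0.01
system = (transfer(d_img, 1.0) @ refraction(P_back)
          @ refraction(P_front) @ transfer(d_obj, 1.0))

ray_in = np.array([0.02, 1.0])   # (n*u, h): slope 0.02 in air (n = 1), height 1.0
print(system @ ray_in)           # the exiting ray, in the same coordinates
```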
7:12 For anyone curious: this is only so simple because the V space was set up so that its matrix is the identity!
This might actually help me with getting more familiar with analysis/calculus
what the hell just happened? is he a magician? why did nobody tell me this? I've taken like 20 math subjects
Differential Equations is my favorite. WOW! THANK YOU SO MUCH FOR THIS VIDEO SIR ❤🧠
Linear algebra is the most fluid and versatile metaphorical skeleton that you can use to solve real-life scenarios
your videos are a joy to watch!
One of my professors in grad school---a famous numerical analyst---said that, with maybe a few exceptions like sorting, any applied problem that can't be turned into linear algebra can't be solved at all.
And now I have a simple way of deriving the sum of angles formulae for sin and cos!
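For anyone who wants that spelled out: multiply two rotation matrices and compare entries (standard derivation, not specific to the video):

```latex
R(a)R(b) =
\begin{pmatrix} \cos a & -\sin a \\ \sin a & \cos a \end{pmatrix}
\begin{pmatrix} \cos b & -\sin b \\ \sin b & \cos b \end{pmatrix}
=
\begin{pmatrix}
\cos a \cos b - \sin a \sin b & -(\sin a \cos b + \cos a \sin b) \\
\sin a \cos b + \cos a \sin b & \cos a \cos b - \sin a \sin b
\end{pmatrix}
= R(a+b)
```

Matching entries with R(a+b) gives cos(a+b) = cos a cos b - sin a sin b and sin(a+b) = sin a cos b + cos a sin b.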
12:00 Ha! This can be obtained from exp(tW) = I + tW + (1/2!)(tW)^2 + ..., where I is the 2x2 identity matrix and W has first row (0 1) and second row (-1 0). Working out the matrix products of the expansion, one identifies exp(tW) = K(t), where K(t) has first row (cos t, sin t) and second row (-sin t, cos t), as in the Iwasawa decomposition, Proposition 2.2.5 of Bump's book Automorphic Forms and Representations.
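And it checks out numerically; a throwaway verification using scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

t = 0.9
W = np.array([[0.0, 1.0], [-1.0, 0.0]])    # first row (0 1), second row (-1 0)
K = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
assert np.allclose(expm(t * W), K)          # exp(tW) = K(t)
```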
Very fascinating. As usual a super interesting video!
That first example reminded me of category theory somehow. I bet there is a functor hiding in that situation, but I'm just a novice in CT.
great video, loved the derivative example! (as a side note, I think your 2nd column for Node 2 in the last example should be {1 0 0 1 0})
As a researcher in quantum information, I am incapable of imagining what my world would look like without the wonders of linear algebra.
9:10 no no no no! I'm in love after seeing this.
Many problems in physics can be studied with linear algebra. From Newtonian mechanics to the monsters called quantum field theory and relativity (special and general), linear algebra has proved to be a powerful tool for making predictions about nature, because it has great unifying power. You can study coupled linear oscillatory systems and find their symmetries under linear transformations; in the end, this implies conserved quantities, according to Noether's theorem.
My favorite math class in college, particularly because it was the easiest for me! I'm a visual thinker and LA suited my brain.
I feel like the most intuitive way to think about linear algebra is LTI systems, i.e., amplifiers. Let's say you want to put a sound through an amplifier and mix it with another one. Then it doesn't matter if you mix the two sounds before or after you put them through the amplifier. That's all linearity means. Now let's say you're Dr. Dre and want to pump the bass. What is an arbitrary sound going to look like after being put through the amplifier? No idea. But each individual frequency is just going to be multiplied or phase-shifted by some number. Therefore the frequencies are eigenvectors of the amplifier; that's all "eigenvector" means. The eigenvalues are just the multiplications and phase shifts. So you can simplify the calculation of what's going to happen to a sound by transforming to a frequency basis and doing your calculations there. That's all matrix diagonalisation is. (Quick numeric check below.)
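That last step runs all the way down to code; here's a throwaway check (my own toy signals, nothing from the video) that scaling each frequency by its eigenvalue matches convolving in the time domain:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)   # the sound
h = rng.standard_normal(n)   # the amplifier's impulse response

# Time domain: circular convolution, computed directly from the definition.
y_time = np.array([sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)])

# Frequency domain: each frequency bin gets scaled by its eigenvalue H[k]
# (a gain and a phase shift), then transform back.
H = np.fft.fft(h)                                # the eigenvalues of the LTI system
y_freq = np.real(np.fft.ifft(H * np.fft.fft(x)))

assert np.allclose(y_time, y_freq)
```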
Another class of applications comes from algebraic topology. Algebraic topology uses linear algebra to study topological spaces.
The 2-node is messed up in the last example: 2 is not connected to 2, but to 1! So the upper-left 2x2 square is not (1, 0; 0, 1) but (1, 1; 1, 0)...
BTW: it's interesting to calculate the eigenvalues and eigenvectors of this matrix...
For example, the 5th eigenvalue is λ_5 = 0, with eigenvector v_5 = (0, -1, 1, 0, 1).
On the other hand, the first eigenvalue is the biggest one, λ_1 = (1+sqrt(13))/2, with eigenvector v_1 = (λ_1, 2, 1, λ_1, 1).
The other eigenvalues are λ_2 = (1+sqrt(5))/2, λ_3 = (1-sqrt(13))/2, and λ_4 = (1-sqrt(5))/2.
But in some cases it may be better to ignore the reflexive edges (self-loops); then λ_1 = -sqrt(3), λ_2 = sqrt(3), λ_3 = -1, λ_4 = 1, and λ_5 = 0. (Numeric check below.)
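For anyone who wants to reproduce those numbers, here's a sketch; I reconstructed the adjacency matrix from the eigenvectors quoted above (loops at nodes 1 and 4, edges 1-2, 1-5, 2-4, 3-4), so my node labels may not match the video's:

```python
import numpy as np

A = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
], dtype=float)

vals = np.sort(np.linalg.eigvalsh(A))
expected = np.sort([(1 + 13**0.5) / 2, (1 - 13**0.5) / 2,
                    (1 + 5**0.5) / 2, (1 - 5**0.5) / 2, 0.0])
assert np.allclose(vals, expected)

# Dropping the two self-loops leaves the path 5-1-2-4-3, whose spectrum is
# +-sqrt(3), +-1, 0, matching the second list above.
A_no_loops = A - np.diag(np.diag(A))
assert np.allclose(np.sort(np.linalg.eigvalsh(A_no_loops)),
                   np.sort([3**0.5, -(3**0.5), 1.0, -1.0, 0.0]))
```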
This is basically like using abstractions in computer programming: representing and embedding computational algorithms as algebraic expressions.
my fav: dx(t)/dt = A x(t), whose solution is x(t) = exp(At) x(0)
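A quick numeric sanity check of that formula, with an arbitrary toy system (scipy assumed available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.3]])   # a made-up damped oscillator
x0 = np.array([1.0, 0.0])
x = lambda s: expm(A * s) @ x0             # the claimed solution x(t) = exp(At) x(0)

t, h = 1.5, 1e-6
finite_diff = (x(t + h) - x(t - h)) / (2 * h)   # numerical dx/dt at time t
assert np.allclose(finite_diff, A @ x(t), atol=1e-5)
```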
I’m enlightened. Thanks
I'm not a math professor, but I think trig is one of the most important concepts in mathematics (speaking as a multivariable calculus student).
Thanks for this lecture. The calculus application was something I never learned about before. A real eye-opener, that.
In the first example, how do we know that the 4 trig-based functions actually span a 4 D space? What about linear independence?
In the graph theory application, it seems to me that vtx 1 is also connected to vtx 2, which you did not include in the matrix.
I think you need to prove that the spanning set is also a basis; that means you need to prove that those 4 functions are linearly independent. They are, so the spanning set is indeed a basis.
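One cheap way to check that numerically: sample the four functions at four generic points; if the 4x4 sample matrix is nonsingular, then any linear combination that vanishes identically must have all-zero coefficients. (I'm using {cos x, sin x, x cos x, x sin x} as a stand-in for the video's four functions; swap in whichever ones it actually used.)

```python
import numpy as np

funcs = [np.cos, np.sin, lambda x: x * np.cos(x), lambda x: x * np.sin(x)]
pts = [0.3, 0.9, 1.7, 2.6]                 # arbitrary sample points

# M[i, j] = value of the j-th function at the i-th point.
M = np.array([[f(p) for f in funcs] for p in pts])

# If c1*f1 + ... + c4*f4 = 0 as a function, then M @ c = 0 at the sample
# points; an invertible M therefore forces c = 0.
assert abs(np.linalg.det(M)) > 1e-6
```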
I don't think these examples show us "unreasonable effectiveness"; their effectiveness is very reasonable. Spaces of smooth functions naturally have the structure of vector spaces, and linear differential equations by definition arise from linear operators on these spaces.
Same story with groups. On the one hand, they have strong connections with rings (via the group-ring construction: an action of a group G gives a ZG-module structure) and so with modules over rings (the theory of modules over rings is a generalization of linear algebra). On the other hand, vector spaces carry a natural action of their automorphism group (known as GL, the general linear group), and for every group G we can find a big enough space V and build a faithful representation G -> GL(V) (a "vectorification" of Cayley's theorem for groups).
That's why the connections with group theory and the theory of differential equations are not surprising.
What IS really surprising is that linear algebra helps us solve a lot of problems from discrete maths. For example, the weak Berge conjecture (a graph is perfect iff its complement is perfect) has a linear-algebraic proof. Also, spectral graph theory studies the spectra of graph matrices (a purely combinatorial construction in which it's hard to see any algebraic meaning) and gives us results about, for example, the inner structure of regular graphs.
This is what we really can call "the unreasonable effectiveness of linear algebra".
Sorry for mistakes, English is not my native language
"Unbelievable" would make better sense here than "unreasonable". Linear algebra is highly effective, so effective that at times it may be difficult to believe just how effective it really is.
Hey great video! Would you mind sharing what brand of chalkboard you use in your videos?
I remember taking linear algebra in college and it opening up amazing possibilities in computer graphics but this is pretty dense. I'm saying this 4 minutes in and will continue watching to see if I get what he's saying before applying for a job with the sponsor 😅😂😂🤣
I definitely share this view with all my students; linear is a serious power-tool
My Differential Equations lecturer said that humans are so stupid that we are only capable of computing linear things. Out of all the possibilities, our brain only works for A(x+y) = A(x) + A(y), so we have developed our mathematics along the lines of what we can actually develop, which means relating everything to linear algebra.
For answering "why is linear algebra so useful?", try to imagine anything not using linearity, and no progress will be achieved, so nobody would study it, nobody would care about it and we would forget about it. So, the usefulness of linear algebra is a kind of survival bias
We need a video about graphs, pls 🙏🏼
Fascinating insight. What do the eigenvectors mean in these situations?
Even after the transformation, an eigenvector stays along its original direction; it may just be stretched (by its eigenvalue).
📝 Summary of Key Points:
📌 Linear algebra provides powerful tools for analyzing mathematical structures and gaining a deeper understanding of them. It allows for the translation of mathematical concepts into the language of linear algebra, enabling their study using linear algebra techniques.
🧐 In the first example, a four-dimensional real vector space spanned by four functions is examined. By representing this structure as a matrix, the derivative of the functions can be analyzed, and the matrix can be used to find the anti-derivative of a function.
🚀 The second example explores how a group can be represented in linear algebra using matrices. The group ZN is represented as 2x2 matrices with real entries, and addition in ZN is represented as matrix multiplication. This representation allows for the study of groups using linear algebra techniques.
📊 Linear algebra has applications in data science and machine learning. Data can be encoded into matrices, and matrix factorization techniques can be used to analyze the data. Examples include encoding images and representing networks or graphs using adjacency matrices.
💡 Additional Insights and Observations:
💬 "Linear algebra provides a powerful framework for studying mathematical structures and solving problems in various fields."
🌐 The video references the use of linear algebra in data science and machine learning, highlighting its practical applications in these areas.
📣 Concluding Remarks:
Linear algebra is a versatile and effective tool for studying mathematical structures and solving problems. By translating mathematical concepts into the language of linear algebra, we can gain a deeper understanding and apply powerful techniques. From analyzing derivatives and integrals to representing groups and encoding data, linear algebra plays a crucial role in various fields of study.
Generated using Talkbud (Browser Extension)
Bravo! ChatGPT -- using the video-transcript -- couldn't have produced a better summary.
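For anyone who wants to poke at the Z_N example from that summary, a minimal sketch; I'm assuming the standard choice of rotation matrices R(2*pi*k/N), which may differ in detail from what the video does:

```python
import numpy as np

def rep(k, N):
    """Represent k in Z_N by rotation through the angle 2*pi*k/N."""
    a = 2 * np.pi * k / N
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a), np.cos(a)]])

# Addition mod N turns into matrix multiplication: R(j) R(k) = R((j + k) mod N).
N = 5
assert np.allclose(rep(2, N) @ rep(4, N), rep((2 + 4) % N, N))
```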
17:50, your matrix is off; it's saying that 2 connects to itself, when that 1 is supposed to be one slot higher.
So I had been advised to study LA *before* Calculus 3, so that I could come to understand the chain rule as a simplified form of the Jacobian.
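For anyone curious, the connection in one line (standard multivariable calculus, not specific to any course):

```latex
D(f \circ g)(x) = Df\big(g(x)\big)\, Dg(x)
```

Jacobians compose by matrix multiplication, and in one variable the 1x1 "matrices" collapse to the familiar (f o g)'(x) = f'(g(x)) g'(x).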
The claim that dim V = 4 at 3:15 is unjustified. Wrong adjacency matrix at 16:50.
Truly = I do not know.
I have much to learn
I do appreciate its beauty
The graphical representation appears to be incorrect, but the video is fantastic!
the existence of a basis for every vector space is equivalent to the axiom of choice, which seems to be unrelated to linear algebra.
If you ever browse the "Attention" paper on the transformer architecture in LLMs, the sentence about positional encoding that goes "... for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}" relates to the first application in this video.
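Right: per frequency pair, shifting the position acts as a 2x2 rotation, the same angle-addition trick as the video's first example. A sketch, with my own toy dimension and the sin/cos interleaving from the paper:

```python
import numpy as np

d_model = 8
freqs = 1.0 / (10000 ** (np.arange(0, d_model, 2) / d_model))

def pe(pos):
    """Sinusoidal encoding: [sin(w0*p), cos(w0*p), sin(w1*p), cos(w1*p), ...]."""
    return np.ravel(np.column_stack([np.sin(freqs * pos), np.cos(freqs * pos)]))

def shift_matrix(k):
    """The fixed linear map with PE(pos + k) = M_k @ PE(pos): block-diagonal
    2x2 rotations, one per frequency."""
    M = np.zeros((d_model, d_model))
    for i, w in enumerate(freqs):
        c, s = np.cos(w * k), np.sin(w * k)
        M[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, s], [-s, c]]
    return M

pos, k = 7, 3
assert np.allclose(pe(pos + k), shift_matrix(k) @ pe(pos))
```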
One can represent the elements of the Galois group of an equation as matrices.
Great vid. Thanks.
Could you theoretically use any periodic function instead of sin and cos in the modulo-sum example? My thinking is that you need pure sine and cosine to get a steady tick around the circle, but with other periodic functions you could have some sort of interesting "weighting function" on the inputs. With sine and cosine it's pure and steady, but with something like a triangle wave or a compound sine wave, you could induce some very strange behavior.
Thank you so much 🙏🙏🙏🙏🙏🙏🙏🙏
Finding the integral by finding the inverse of the derivative matrix was kind of mind-blowing. I know your example was chosen to make it simple, but can this be generally applied as a way to compute integrals?
I remember being mind-blown just like you and wondering the same thing when I first saw this. I went down the rabbit hole of functional analysis and differential equations and I'm still digging. A neat way of thinking about this problem is to view it as solving the differential equation Df=g for the antiderivative f of the function g, and D is the derivative operator.
Since differentiation is linear, Df=g now looks like a linear algebra problem where D is a linear operator, and f and g are vectors. In fact, in the context of functional analysis, functions are vectors belonging to vector spaces of functions appropriately called function spaces.
If we can find a finite basis for the function space of f and g, then we can represent D as a matrix, g as a coordinate vector, and solve for f by matrix multiplying D inverse with g just like in the video.
In general, these function spaces could be infinite dimensional and there's not always a useful basis to represent them, but the field of functional analysis has classified many kinds of function spaces with a variety of useful bases for solving differential equations.
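To make that concrete, a toy instance (my own two-function basis, not the one from the video):

```python
import numpy as np

# On V = span{e^x, e^(2x)}, the derivative operator is diagonal:
# D(e^x) = e^x and D(e^(2x)) = 2 e^(2x), so D = diag(1, 2) and is invertible.
D = np.diag([1.0, 2.0])
g = np.array([3.0, 4.0])       # coordinates of g(x) = 3 e^x + 4 e^(2x)

f = np.linalg.solve(D, g)      # solve Df = g rather than forming D^{-1} explicitly
print(f)                       # [3. 2.]  ->  f(x) = 3 e^x + 2 e^(2x), and f' = g
```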
My fav is least-squares solutions.
Excellent video!
As someone who knows neither, should you study Linear Algebra first or Calculus first?
Great video!
thank you
Hi, how would you encode the +C? You can point me to a reference or calculate directly here; I'll follow the notation. Also, I'd never seen the integral in vector-matrix notation before, but I can follow you very easily. MS CS, with a BS CS (ML) and BA Math (Stats). I wanted to take more real/complex analysis and advanced matrix theory but ran out of time and had to get my career started before I hit 35. Love staying sharp with this content in the early mornings. TYVM❤
Brilliant! as always!
(Finite dimensional) linear algebra is the backbone of calculus. And calculus is everywhere, so linear algebra is everywhere.
In Soviet Russia there was an economic advisor (I forget his name) who used linear algebra to balance production across cities to maximize profits and minimize waste. Also, I think about linear algebra all the time while I sit at strings of red lights in traffic every day going to class 🤬😡
That is really fucking interesting
A lot of math is about figuring out how to reduce a complicated problem to linear algebra.
Shouldn't the first column in the graph matrix be (1,1,0,0,1)????
You love your linear algebra. I love your abstract algebra. Let's call the whole thing off.
Isn't there a mistake in the matrix at 14:20 and following? The matrix shows no connection between points 1 and 2, but a connection between 2 and itself.
8:46 Isn't that the wrong order of matrix and vector? Seems like it should be (1,0,0,0)*D^-1