This type of short video about 1-2 topics is really helpful. Easy to find the info needed. Thanks!
Next time please also discuss numerical stability of the algorithm and rounding-error-creep. Those are important considerations when doing numerical computations using floating point numbers.
Totally agree, especially in what I do in trading.
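If anyone wants to see rounding-error creep before such a video exists, here is a tiny Python sketch (the numbers are only illustrative): each floating-point addition rounds a little, and the error accumulates over many operations.

    import math

    total = 0.0
    for _ in range(1000):
        total += 0.1                  # each addition rounds to the nearest double
    print(total)                      # slightly off from 100.0
    print(math.fsum([0.1] * 1000))    # compensated summation recovers 100.0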
Clear and straight to the point. Thank you!
But note that in practice you almost never want to compute the inverse of a matrix explicitly. This is one of the important lessons of numerical linear algebra. For example, when solving Ax = b, you'd typically compute the LU factorization of A (or perhaps the QR factorization of A) rather than explicitly computing the inverse of A. When changing basis from the standard basis to the basis consisting of the columns of a matrix Q, we are given a vector x and we need to compute y = Q^{-1} x. But the best way to do that is to solve the system Qy = x. To solve that system, you would not compute the inverse of Q explicitly.
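A rough sketch of what that looks like with NumPy/SciPy (A and b below are just random placeholders): solve the system instead of forming the inverse, and reuse the LU factorization if you have many right-hand sides.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    b = rng.standard_normal(4)

    x_bad = np.linalg.inv(A) @ b        # works, but extra work and extra rounding error
    x_good = np.linalg.solve(A, b)      # factorizes A and back-substitutes

    lu, piv = lu_factor(A)              # factor once...
    x_lu = lu_solve((lu, piv), b)       # ...then solve cheaply for each new b

    print(np.allclose(x_bad, x_good), np.allclose(x_good, x_lu))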
Very useful, because I'm taking linear algebra in university and I'm looking for tools to help me out with everything I'm learning.
Why are the off-diagonals of np.matmul(M, Minv) non-zero? Is this due to the underlying precision of the C code running under the hood of NumPy?
I don't think it's C-related. It's the fact that the arithmetic is done in 32-bit or 64-bit floating-point precision.
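Right, it's ordinary double-precision rounding, not the C layer. A quick way to see and check it (using a stand-in matrix, since I don't have the exact one from the video):

    import numpy as np

    M = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
    Minv = np.linalg.inv(M)

    P = np.matmul(M, Minv)
    print(P)                          # off-diagonal entries are tiny (~1e-16), not exactly zero
    print(np.allclose(P, np.eye(4)))  # True: identity up to floating-point tolerance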
The matrix you used is the stiffness matrix of 4 springs connected in series, each with stiffness one.
I believe M @ Minv and M.dot(Minv) would also do the same matrix multiplication operation with 2D numpy arrays.
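They should, yes. A quick check (random matrix just for illustration):

    import numpy as np

    M = np.random.default_rng(1).standard_normal((3, 3))
    Minv = np.linalg.inv(M)

    a = np.matmul(M, Minv)
    b = M @ Minv        # the @ operator dispatches to the same matmul
    c = M.dot(Minv)     # for 2-D arrays, dot is plain matrix multiplication too

    print(np.allclose(a, b), np.allclose(a, c))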
I was there when it was written
is there a way to vectorize the inverse operation if we have two or more square matrices?
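Yes, in recent NumPy versions np.linalg.inv operates over the last two axes, so you can pass a whole stack of square matrices in one call. Something like this should work (the shapes here are just an example):

    import numpy as np

    A = np.random.default_rng(0).standard_normal((5, 3, 3))  # a stack of five 3x3 matrices
    A_inv = np.linalg.inv(A)                                  # inverts all five at once

    print(A_inv.shape)                           # (5, 3, 3)
    print(np.allclose(A @ A_inv, np.eye(3)))     # each product is (numerically) the identity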
Please share the vendor name for the desk
is there a way to represent that matrix in 2-decimal format?
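If it helps, two common options (tiny example matrix, not the one from the video): round the values themselves, or just change how NumPy prints them.

    import numpy as np

    Minv = np.linalg.inv(np.array([[2., 1.],
                                   [1., 2.]]))

    print(np.round(Minv, 2))            # returns a copy rounded to 2 decimals

    with np.printoptions(precision=2, suppress=True):
        print(Minv)                     # only changes the display, not the stored values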
Wanted to ask how was your journey with Open University. I'm planning to do a bsc mathematics and statistics honours :) for data science. Would love your opinion if it's a good choice :)
I choked on my breakfast in the intro lol
what about coding it from scratch?
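For fun, here's a from-scratch sketch using Gauss-Jordan elimination with partial pivoting (written for clarity, not speed - in practice you'd still use NumPy's own routines):

    import numpy as np

    def inverse_gauss_jordan(A):
        """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        aug = np.hstack([A, np.eye(n)])                      # augmented matrix [A | I]
        for col in range(n):
            pivot = col + np.argmax(np.abs(aug[col:, col]))  # largest pivot for stability
            if np.isclose(aug[pivot, col], 0.0):
                raise ValueError("matrix is singular (or numerically close to it)")
            aug[[col, pivot]] = aug[[pivot, col]]            # swap the pivot row into place
            aug[col] /= aug[col, col]                        # scale so the pivot becomes 1
            for row in range(n):
                if row != col:
                    aug[row] -= aug[row, col] * aug[col]     # zero out the rest of the column
        return aug[:, n:]                                    # right half is now the inverse

    M = np.array([[2., -1.], [-1., 2.]])
    print(np.allclose(inverse_gauss_jordan(M), np.linalg.inv(M)))   # True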
Shouldn't multiplying the inverse matrix by the matrix give the identity matrix, rather than an upper triangular one?
Wait, 0 views 35 likes
pog
Also 45 likes 😂😂
Lol, 1 view and 5 comments