Why does this channel have 4 thousand subscribers and not 4 million, it would be truly deserved
Another awesome video tho, and looking forward for 3rd part!
Yayy!! Thanks for the support, I really appreciate it!
This video is so underrated. This is the clearest explanation of semidefinite programming one can find on the internet without having to study control theory or Boyd's convex optimization course. The animation of the feasible region as a 3D object completely blew my mind, and it sends a powerful message about the beautiful nature of semidefinite programming. It's difficult to express my appreciation with my limited vocabulary.
I really really appreciated your nice comment, and I am glad you liked the video!!
If you align the first vector with [1, 0, 0], you see the 3 vectors are on xy plane, separated by 120 degrees.
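A quick numerical sketch of this picture (using numpy, with the vectors placed by hand): three unit vectors in the xy-plane, 120 degrees apart, have pairwise inner products of -1/2, which matches the -1/(n-1) bound for n = 3.

```python
import numpy as np

# Three unit vectors in the xy-plane, separated by 120 degrees
angles = np.deg2rad([0.0, 120.0, 240.0])
vectors = np.stack([np.cos(angles), np.sin(angles), np.zeros(3)], axis=1)

# The first vector is aligned with [1, 0, 0]
assert np.allclose(vectors[0], [1.0, 0.0, 0.0])

# The Gram matrix has ones on the diagonal and -1/2 off the diagonal
gram = vectors @ vectors.T
print(np.round(gram, 6))
```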
really like these visualizations! I'm just now getting into SDPs for computer vision, would love to see a video on how different SDP solvers work internally. That's pretty much a mystery to me, and it's hard to find good explanations
I love these videos, but I feel like the pacing is a bit fast. If you slowed down the explanations in the video by 25% that would be very helpful to me. Keep up the great work!
Fair enough, thanks for the valuable feedback!
Truly awesome Bashir! Subscribed for more content !!
I've never heard of this...great explanation. thanks very much!
You're very welcome! :-)
As always, what extraordinary content, Bakhir :)
Thanks a ton!!!
Awesome content! Thank you🎉
Great!!! I've had trouble getting visual intuition for SDPs, and this helps a lot!!
Fantastic! Great to hear!
When writing A=C^TC isn't C more akin to the Cholesky decomposition and a square root of A would be B=A^(1/2)? I know this notation is used sometimes, but it seems confusing to refer to C as a square root since you would think you could just square it too get back A, but you actually have to transpose one of the C. In particular, sqrtm from scipy uses the more traditional definition of a square root.
You are right! This is not the actual definition of square root.
Yes, that is indeed correct. A matrix B is a square root of a matrix A if B^2 = A. Thus, if we consider the spectral decomposition A = V diag(lambda_1, ..., lambda_n) V^T, and set B = V diag(sqrt(lambda_1), ..., sqrt(lambda_n)) V^T, we get what we wanted. Notice also that B is a symmetric matrix, so B^2 = B^T B = A.
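The distinction between the two notions can be checked in a few lines (a sketch using numpy and scipy): the spectral square root B satisfies B^2 = A and agrees with scipy's sqrtm, while a Cholesky-style factor C only satisfies C^T C = A.

```python
import numpy as np
from scipy.linalg import sqrtm, cholesky

rng = np.random.default_rng(0)
# Build a random symmetric positive definite matrix A = C^T C
C = rng.standard_normal((4, 4))
A = C.T @ C

# Square root via the spectral decomposition A = V diag(lambda) V^T
eigvals, V = np.linalg.eigh(A)
B = V @ np.diag(np.sqrt(eigvals)) @ V.T

assert np.allclose(B @ B, A)   # B is a genuine square root: B^2 = A
assert np.allclose(B, B.T)     # and it is symmetric
assert np.allclose(sqrtm(A), B)  # scipy's sqrtm uses this definition

# The Cholesky factor U satisfies U^T U = A, but U^2 != A in general
U = cholesky(A)  # upper triangular by default, A = U^T U
assert np.allclose(U.T @ U, A)
```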
Great video. 👍
love these, do you accept donations?
I don’t take donations, but I accept nice comments. Thank you!
Hey! So I know that for an n x n matrix with ones on the diagonal and alpha off the diagonal, the min alpha is -1/(n-1). How would I go about proving this? I can't think of a general system of equations for the eigenvalues.
Absolutely beautiful videos by the way!
@@iamnottellingumyname That's a fun problem! Here is an elegant solution that doesn't require too much computation. Let M be the matrix with ones on the diagonal and alpha on the off-diagonal. We want the smallest alpha s.t. M stays psd.
We can write M = alpha * J + (1-alpha) * I, where J is the all-ones matrix and I is the identity matrix. It's not hard to show J has rank one, with eigenvalues (n, 0, ..., 0). This means that the eigenvalues of alpha*J are (n*alpha, 0, ..., 0). Adding (1-alpha)*I shifts all the eigenvalues by (1-alpha), so the eigenvalues of M are (n*alpha + (1-alpha), 1-alpha, ..., 1-alpha). From there you can easily show that min alpha is -1/(n-1).
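The argument above can be checked numerically (a sketch using numpy, with n = 5 as an arbitrary example): at alpha = -1/(n-1) the smallest eigenvalue of M is exactly zero, so M is on the boundary of the psd cone.

```python
import numpy as np

n = 5
alpha = -1.0 / (n - 1)  # the claimed smallest feasible alpha

# M has ones on the diagonal and alpha off the diagonal:
# M = alpha * J + (1 - alpha) * I, with J the all-ones matrix
J = np.ones((n, n))
M = alpha * J + (1 - alpha) * np.eye(n)

eigvals = np.sort(np.linalg.eigvalsh(M))
# Predicted spectrum: n*alpha + (1 - alpha) once, and 1 - alpha repeated n-1 times
predicted = np.sort([n * alpha + (1 - alpha)] + [1 - alpha] * (n - 1))
assert np.allclose(eigvals, predicted)

# The smallest eigenvalue is zero, so M is psd but barely so
print(np.round(eigvals, 6))
```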
@@VisuallyExplained that’s a really beautiful solution!! Appreciate the reply, keep up the good work
Thank you
I noticed that you used Python in many of the videos you posted. Can you recommend a book that can help me quickly learn how to solve NP-hard problems in Python?
If you mean solving NP-hard problems, I am afraid even the almighty python cannot do that ... If you mean getting good-enough solutions for NP-hard problems, then I have a feeling that you might like the next video ;)
is there a mistake?
So much pressure.