Martijn Anthonissen
Netherlands
Joined 15 Dec 2006
I am a mathematician and work at Eindhoven University of Technology (TU/e) in the Netherlands.
Many things in the world around us can be described with mathematical models. These equations are usually too difficult to solve exactly, but it is possible to solve them numerically. This field is called numerical mathematics or scientific computing. Computer simulations offer a new way to approach science in addition to theory and experiments.
I work in the Computational Illumination Optics group at TU/e. The basic goal in illumination optics is to design an optical system that turns a given light source into a desired light output. Typical applications are LED lighting, road lights and car headlights.
Animation of a solar concentrator
Made by Robert van Gestel during his PhD research at TU/e
Views: 18
Videos
Animation of "bucket of water" problem
61 views · 1 month ago
Animation by Robert van Gestel on his PhD work at TU/e
Mass-spring system
23 views · 1 month ago
This animation shows a simulation of a horizontal mass-spring system
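A hypothetical sketch of such a simulation (my own, not necessarily the one in the video): a mass m on a horizontal spring with stiffness k obeys m x'' = -k x, integrated here with the semi-implicit (symplectic) Euler method.

```python
import math

# Assumed parameters: mass m and stiffness k, so omega = sqrt(k/m) = 2
m, k = 1.0, 4.0
x, v = 1.0, 0.0           # initial displacement and velocity
dt, steps = 0.001, 1000   # integrate up to t = 1
for _ in range(steps):
    v += dt * (-k / m) * x    # update velocity from the spring force F = -k x
    x += dt * v               # then update position with the new velocity
print(x)                  # exact solution is x(t) = cos(omega * t)
```

For these initial conditions the exact solution is x(t) = cos(2t), so after t = 1 the computed x should be close to cos(2) ≈ -0.416.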
Tangent plane to the graph of a function of two variables
1.2K views · 2 years ago
Worked out example
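The standard formula behind such an example: for a differentiable f, the tangent plane to the graph z = f(x, y) at the point (a, b) is

```latex
z = f(a,b) + f_x(a,b)\,(x-a) + f_y(a,b)\,(y-b).
```

For instance (a generic case, not necessarily the one worked out in the video), f(x, y) = x^2 + y^2 at (1, 1) gives z = 2 + 2(x - 1) + 2(y - 1).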
Iterative methods for linear systems, Part 1
892 views · 2 years ago
Iterative methods for linear systems, Part 1
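A minimal sketch (my own, not necessarily the method covered in the video) of one classic iterative method for linear systems: Jacobi iteration, x_{k+1} = D^{-1}(b - (A - D)x_k), which converges for this strictly diagonally dominant matrix.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(A)            # diagonal entries of A, as a vector
x = np.zeros(3)
for _ in range(100):
    # Jacobi update: solve each equation for its own unknown,
    # using the previous iterate for the others
    x = (b - (A @ x - D * x)) / D
print(np.linalg.norm(A @ x - b))   # residual; tiny after convergence
```

The iteration matrix here has norm well below 1, so 100 sweeps drive the residual down to rounding level.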
Double integral in polar coordinates
1.3K views · 2 years ago
Double integral in polar coordinates
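A hedged sketch (my own, not necessarily the video's example): the area of the unit disk as a double integral in polar coordinates, integral over theta in [0, 2pi] and r in [0, 1] of r dr dtheta = pi, approximated with the midpoint rule. The factor r is the Jacobian of the change of variables.

```python
import math

nr, nt = 100, 100
dr, dt = 1.0 / nr, 2.0 * math.pi / nt
total = 0.0
for i in range(nr):
    r = (i + 0.5) * dr               # midpoint in r
    for j in range(nt):
        total += r * dr * dt         # integrand is the Jacobian r
print(total)                         # should be close to pi
```

Because the integrand is linear in r, the midpoint rule is exact here up to rounding.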
Solving an ordinary differential equation
894 views · 2 years ago
Solving an ordinary differential equation
Intersection points of a plane with the coordinate axes
1.3K views · 2 years ago
Intersection points of a plane with the coordinate axes
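As a generic worked instance (not necessarily the example in the video): for a plane ax + by + cz = d with a, b, c, d all nonzero, setting two variables to zero at a time gives the intercepts

```latex
\left(\tfrac{d}{a},\,0,\,0\right),\qquad
\left(0,\,\tfrac{d}{b},\,0\right),\qquad
\left(0,\,0,\,\tfrac{d}{c}\right).
```

For example, the plane 2x + 3y + 6z = 6 meets the axes at (3, 0, 0), (0, 2, 0) and (0, 0, 1).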
nou joe aur eej ferrie choet tietcher! tenk joe ferrie meutch!!
Give examples to explain the topic. You're just reading the slides. I didn't understand a single thing. The same shit is written in my notes, but how do I actually implement it?
Still helping students in 2024! Here from Romania, keep up these kinds of videos!
This is insanely good
@@ghostcookie882 Glad you like it! Thanks
Hi sir, could you give me any link for the scientific computation course which discusses about advanced linear system solvers? Thank you.
Thanks for your interest! I'm afraid that course is not publicly available on YouTube. You're welcome to come to Eindhoven, of course 🙂
You're always apologizing for the proofs, but they're the best part!
@@earlducaine1085 Glad you enjoy the proofs too! Thanks for the feedback!
Very clear explanation. This is good stuff. I have 2 questions: 1) Why does Ai stay upper Hessenberg? 2) I don't really grasp why the diagonal elements of Ai converge to the eigenvalues. Thanks for the video!
Thanks for the feedback! Great questions - do you have access to the book "Scientific Computing" by Gander, Gander and Kwok? Your questions are answered in Sections 7.6.4 and 7.6.7
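A small numerical illustration (my own, not from the book): the basic QR iteration A_{i+1} = R_i Q_i is a similarity transform, so every iterate has the same eigenvalues as A; for this symmetric tridiagonal (hence Hessenberg) matrix the iterates stay tridiagonal and the diagonal converges to the eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ai = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ai)   # factor the current iterate
    Ai = R @ Q                # similarity transform: same spectrum as A
print(np.sort(np.diag(Ai)))            # approximate eigenvalues
print(np.sort(np.linalg.eigvalsh(A)))  # reference values
```

The entries below the subdiagonal remain (numerically) zero throughout, which is the Hessenberg structure being preserved.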
Reducing the interval [a;b] to [-1;1] can be done by substitution, but the biggest problem with Gauss-Legendre quadrature is the calculation of the nodes. I would like to see a comparison with Gauss-Chebyshev quadrature: Chebyshev nodes are easy to compute, while Legendre nodes need numerical methods. (I can get the coefficients of the Legendre polynomial of degree n in linear time from the power-series solution of its differential equation, but to calculate the nodes I need numerical methods such as the QR method for eigenvalues.)
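To make the comparison concrete, here is a hedged sketch (my own, not from the video): Chebyshev nodes from their closed form versus Legendre nodes via the Golub-Welsch approach, i.e. as eigenvalues of the symmetric tridiagonal Jacobi matrix, which is exactly the kind of QR-based eigenvalue computation mentioned above.

```python
import numpy as np

def chebyshev_nodes(n):
    # Closed form: the roots of the Chebyshev polynomial T_n
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def gauss_legendre_nodes(n):
    # Golub-Welsch: Legendre nodes are the eigenvalues of the symmetric
    # tridiagonal Jacobi matrix with off-diagonal beta_k = k / sqrt(4k^2 - 1)
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)
    J = np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(J)   # returned in ascending order

print(chebyshev_nodes(3))        # closed form, no eigenvalue solve needed
print(gauss_legendre_nodes(3))   # approx [-0.7746, 0, 0.7746]
```

For n = 3 the Legendre nodes are 0 and ±sqrt(3/5), which the eigenvalue solve reproduces to machine precision.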
Could we use deflation as well to compute the eigenvalues ?
@@sanchitagarwal8764 Yes, indeed. You can combine it with deflation. This is discussed in, e.g., the book by Gander, Gander & Kwok on Scientific Computing
@@martijnanthonissen I think I finally got it: Francis steps can be applied as long as the (m, m-1) element is larger than machine epsilon, and then deflation can be done
Great content Professor
You've become my professor of linear algebra. Thank you very much.
Super. Thanks.
you explain everything in an extremely clear manner, thanks a lot
@@anthonykonstantinou5378 Great to hear that. Thanks for the feedback!
Hi Sir, this video is exceptional; it summarizes all of SVD. I have just one question: can we find AA^T first, which will give the U matrix, and then find the V matrix using the relation U(sigma) = AV? I mean the reverse process of what is usually found in books
Thanks! I'm not sure the reverse process will work because we define the columns of U in terms of the columns of V
@@martijnanthonissen Sir, is it necessary to arrange the singular values in descending order? What if we don't?
@@advancedappliedandpuremath You can find a factorization where the singular values are not sorted. However, the singular values are a weight for the importance of the singular vectors. For applications you usually approximate the matrix using only a few singular vectors and then it is useful to sort them by importance
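A hedged illustration (my own example, not from the video) of why the descending order is useful: truncating the sorted SVD gives the best rank-k approximation, and by the Eckart-Young theorem its 2-norm error equals the first discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s is sorted, descending
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # keep only the k largest singular values
err = np.linalg.norm(A - A_k, ord=2)   # spectral-norm approximation error
print(err, s[k])                       # the two numbers agree
```

If the singular values were not sorted, "keep the first k" would no longer mean "keep the most important directions".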
@@martijnanthonissen Great Sir thanks a lot.
Thanks! What do you mean by "power of matrices property"?
At 3:57, the property entitled Lemma: Power of matrices. You mentioned that it was explained previously in one of these videos.
@@karimsayed4889 It is in video 1-3 of the NLA playlist. Here's a link ua-cam.com/video/utLFuFLZOFk/v-deo.htmlsi=1k2EZr0Wrv_ASTdj
Great video. Where do you explain the power of matrices property?
How do you choose the shifts sigma when you don't know the eigenvalues?
That is indeed a problem. You can estimate eigenvalues (using e.g. Gershgorin's theorem)
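A sketch (my own, not from the video) of Gershgorin's theorem used as an eigenvalue estimator: every eigenvalue lies in a disc centred at a diagonal entry with radius equal to the off-diagonal absolute row sum, so the diagonal entries are cheap candidate shifts sigma.

```python
import numpy as np

def gershgorin_discs(A):
    # Centre of disc i is a_ii; radius is the sum of |a_ij| for j != i
    A = np.asarray(A)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
centers, radii = gershgorin_discs(A)
for lam in np.linalg.eigvalsh(A):
    # every eigenvalue lies in at least one disc
    print(lam, any(abs(lam - c) <= r for c, r in zip(centers, radii)))
```

Here the discs are [3, 5], [1, 5] and [1, 3], and indeed the eigenvalues 3 ± sqrt(3) and 3 fall inside them.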
Hi. I am Chong Li from Bloodsport. Love your lectures sir!
Great to hear that, thank you!
Wow!
Thank you, dear sir. Very simple and easy to understand. I was a math undergrad, and I am going back for my master's after 10 years, so this is a huge help for revision.
This has been a perfect explanation about the householder's QR decomposition! Really grateful for your video!!
please, can you send me a link to find the gradient(derivative) of norms? thank you
You probably need the derivative of the 2-norm? I think you can find that yourself! Just try differentiating its definition in components
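Carrying out that suggestion in components, as a sketch: for x ≠ 0,

```latex
\frac{\partial}{\partial x_j}\,\|x\|_2
  = \frac{\partial}{\partial x_j}\Bigl(\sum_i x_i^2\Bigr)^{1/2}
  = \frac{x_j}{\|x\|_2},
\qquad\text{so}\qquad
\nabla\,\|x\|_2 = \frac{x}{\|x\|_2}.
```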
Thank you. Extremely helpful!
Greatly help with my understanding in concepts and algorithms!!
this is so great! Thanks!
Wonderful lecture. Missing such mathematics for a long time. Thanks!
i really like your videos but can you explain how to get H2 and H1 simply
Each video in this series is great, thanks from the bottom of my heart.
Though I wish there were more of the underlying technical details, as in some of your other lectures (i.e. the intuition on why this works).
Hi Prof Martijn, I always review your videos for numerical linear algebra; your explanations just make sense and are intuitive. Thank you.
Hello professor, may I know which book you have taken for reference, thank you 😊
There are a couple of books I like on the topic. Each one of these is a great resource:
- Michael T. Heath, Scientific Computing: An Introductory Survey. McGraw-Hill
- Walter Gander, Martin J. Gander, Felix Kwok, Scientific Computing: An Introduction using Maple and MATLAB. Springer, 2014
- Richard L. Burden, J. Douglas Faires and Annette M. Burden, Numerical Analysis, 10th edition. Cengage Learning, 2016
Watching from Zürich, this is so great! You should have more subscribers
you are a bless Dr Martijn. Deep appreciation for your videos, you are a great lecturer
I absolutely loved your explanation for choosing the sign at 23:25, I couldn't find anywhere else on the internet whether we are supposed to use +, - or signum and why, thank you a lot professor.
You are most welcome. Thanks for your nice comment!
Excellent explanation!
σ is the smallest singular value of A∗A, the µi are the singular values of A, and we have used the fact that A∗A is normal. Why is norm((A∗A)^-1) = (norm(A^+))^2? Can you explain it for me? Thank you so much
I am afraid that I do not understand what you are asking. Could you elaborate please?
I've seen these videos and frankly they are some of the best on the web, in terms of clarity and information. Love it.
Thanks! Great to hear you like my videos!
Check Video 2-3 in this series. It discusses sensitivity of linear systems
Can you help me explain sensitivity of variable A
One important thing: the cross product is a vector. Americans misuse the terms cross product and its length. For example, in convex hull algorithms they use the term cross product even though the cross product works only for 3D vectors, and what they actually have in mind is the length of the cross product.
Thank you so much for these lectures ❤
can you provide code for this
Dear Professor, thank you very much for the explanation! How could I deal with complex matrices? Can I use QR/Schur for the complex case? As far as I understood, you derived the explanation for real values.
Indeed, the video is for real matrices. The decomposition exists for complex matrices too. You can look on Wikipedia to see how that works. Good luck!
Thank you very much, Professor! @@martijnanthonissen
And suppose I would like to get some orthogonal polynomials via orthogonalisation. How would the Householder transformation help? I know how to use modified Gram-Schmidt for this purpose. We have an inner product other than v_{j}^{T}v_{k}. For example, the inner product for Chebyshov (the ё in the Russian name is usually written е, so "o" better reflects the pronunciation) polynomials is sum(a_{2k}*binomial(2k,k)/2^(2k), k=0..floor((m+n)/2)), where p(x)q(x) = sum(a_{k}*x^k, k=0..m+n), so this inner product is different from the one produced by v_{j}^{T}v_{k}
I do not have a direct answer to your question, but the video ua-cam.com/video/OGRuR2uOWUQ/v-deo.htmlsi=O36_V4M2V25w5vzU covers Gram-Schmidt to factor a matrix. You may also use Householder to get such a factorization
well done appreciate your effort
This series has been awesome. Thank you so much for publishing these lectures!
Great to hear that. Thanks!
Good stuff. The Dutch was a little confusing, but luckily math is universal
I do have videos in English on my channel...
Thank you!❤
Absolutely the best explanation -- thank you tremendously!
Thank you professor
Thank you so much sir