Martijn Anthonissen
Animation of a solar concentrator
Made by Robert van Gestel during his PhD research at TU/e
18 views

Videos

Animation of "bucket of water" problem
Переглядів 61Місяць тому
Animation by Robert van Gestel on his PhD work at TU/e
Animation of an optical fiber
Переглядів 32Місяць тому
Animation of an optical fiber
Mass-spring system
Переглядів 23Місяць тому
This animation shows a simulation of a horizontal mass-spring system
Finding a volume using polar coordinates
Переглядів 1,7 тис.2 роки тому
Worked out example
Finding the area of a region
Переглядів 1,3 тис.2 роки тому
Worked out example
Arc length
Переглядів 9272 роки тому
Worked out example
Directional derivative
Переглядів 1,2 тис.2 роки тому
Worked out example
Chain rule
Переглядів 9402 роки тому
Worked out example
Solving an initial value problem
Переглядів 1,1 тис.2 роки тому
Worked out example
Tangent plane to the graph of a function of two variables
Переглядів 1,2 тис.2 роки тому
Worked out example
Domain of a function of two variables
Переглядів 8392 роки тому
Worked out example
lecture 6 part 5
Переглядів 5172 роки тому
lecture 6 part 5
lecture 6 part 4
Переглядів 5452 роки тому
We consider the convergence of the basic iterative methods
Part 3
Переглядів 5832 роки тому
Basic iterative methods: - Jacobi - Gauss-Seidel - SOR
Part 2
Переглядів 5402 роки тому
Part 2
Iterative methods for linear systems, Part 1
Переглядів 8922 роки тому
Iterative methods for linear systems, Part 1
Double integral in polar coordinates
Переглядів 1,3 тис.2 роки тому
Double integral in polar coordinates
Mass of a flat plate
Переглядів 1,5 тис.2 роки тому
Mass of a flat plate
Length of a curve
Переглядів 1,1 тис.2 роки тому
Length of a curve
Area of a surface
Переглядів 9492 роки тому
Area of a surface
Solving an ordinary differential equation
Переглядів 8942 роки тому
Solving an ordinary differential equation
Partial derivatives
Переглядів 8342 роки тому
Partial derivatives
Intersection points of a plane with the coordinate axes
Переглядів 1,3 тис.2 роки тому
Intersection points of a plane with the coordinate axes
Domain of a function of two variables
Переглядів 9532 роки тому
Domain of a function of two variables
7-4 Differential equations
Переглядів 4003 роки тому
7-4 Differential equations
7-3 Improper integrals
Переглядів 3233 роки тому
7-3 Improper integrals
7-2 Surface area
Переглядів 2103 роки тому
7-2 Surface area
7-1 Length of a curve
Переглядів 3063 роки тому
7-1 Length of a curve
6-4 Integration by substitution
Переглядів 3643 роки тому
6-4 Integration by substitution

COMMENTS

  • @TheClockmister
    @TheClockmister · 1 month ago

    nou joe aur eej ferrie choet tietcher! tenk joe ferrie meutch!!

  • @TanishqDass-y3b
    @TanishqDass-y3b · 1 month ago

    Give examples to explain the topic. You're just reading the slides. Didn't understand a single thing. The same shit is written in my notes, but how do I actually implement it?

  • @down3879
    @down3879 · 1 month ago

    Still helping students in 2024, here from Romania. Keep up with these kinds of videos!

  • @ghostcookie882
    @ghostcookie882 · 2 months ago

    This is insanely good

  • @BikashBayan-x9u
    @BikashBayan-x9u · 2 months ago

    Hi sir, could you give me a link for the scientific computation course which discusses advanced linear system solvers? Thank you.

    • @martijnanthonissen
      @martijnanthonissen · 2 months ago

      Thanks for your interest! I'm afraid that course is not publicly available on UA-cam. You're welcome to come to Eindhoven, naturally 🙂

  • @earlducaine1085
    @earlducaine1085 · 2 months ago

    You're always apologizing for the proofs, but they're the best part!

    • @martijnanthonissen
      @martijnanthonissen · 2 months ago

      @@earlducaine1085 Glad you enjoy the proofs too! Thanks for the feedback!

  • @bouenmarjan
    @bouenmarjan · 3 months ago

    Very clear explanation. This is good stuff. I have 2 questions: 1) Why does Ai stay upper Hessenberg? 2) I don't really grasp why the diagonal elements of Ai converge to the eigenvalues. Thanks for the video!

    • @martijnanthonissen
      @martijnanthonissen · 2 months ago

      Thanks for the feedback! Great questions - do you have access to the book "Scientific Computing" by Gander, Gander and Kwok? Your questions are answered in Sections 7.6.4 and 7.6.7.

  • @holyshit922
    @holyshit922 · 3 months ago

    Reduction of the interval [a, b] to the interval [-1, 1] can be done by substitution, but the biggest problem with Gauss-Legendre quadratures is the calculation of the nodes. I would like to see a comparison with Gauss-Chebyshev quadratures: Chebyshev nodes are easy to compute, while Legendre nodes need numerical methods. (I can get the coefficients of the Legendre polynomial of degree n in linear time from the power series solution of its differential equation, but to calculate the nodes I need numerical methods such as the QR method for eigenvalues.)
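
A minimal sketch of the contrast raised above (illustrative only, not from the channel): Chebyshev nodes have a closed form, while Gauss-Legendre nodes are computed numerically, here via NumPy's leggauss.

```python
import numpy as np

n = 8

# Chebyshev (first-kind) nodes on [-1, 1]: closed form, no numerics needed
k = np.arange(1, n + 1)
cheb_nodes = np.cos((2 * k - 1) * np.pi / (2 * n))

# Gauss-Legendre nodes and weights: computed numerically (no closed form for general n)
leg_nodes, leg_weights = np.polynomial.legendre.leggauss(n)

print(np.sort(cheb_nodes))
print(leg_nodes)
```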

  • @sanchitagarwal8764
    @sanchitagarwal8764 · 4 months ago

    Could we use deflation as well to compute the eigenvalues?

    • @martijnanthonissen
      @martijnanthonissen · 3 months ago

      @@sanchitagarwal8764 Yes, indeed. You can combine it with deflation. This is discussed in, e.g., the book by Gander, Gander & Kwok on Scientific Computing.

    • @sanchitagarwal8764
      @sanchitagarwal8764 · 3 months ago

      @@martijnanthonissen I think I finally got it: the Francis step can be applied while the (m, m-1) element is larger than machine epsilon, and then deflation can be done.
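
A minimal sketch of the deflation strategy discussed in this thread, under simplifying assumptions: plain unshifted QR steps stand in for Francis steps, there is no Hessenberg reduction, and the function name and tolerance are illustrative.

```python
import numpy as np

def qr_eigs_with_deflation(A, tol=1e-12, max_steps=10_000):
    """Approximate the eigenvalues of a real matrix by QR iteration with deflation.

    Sketch only: unshifted QR steps, so convergence can be slow; a practical
    code would use Hessenberg reduction and shifted (Francis) steps instead.
    """
    A = np.array(A, dtype=float)
    eigs = []
    while A.shape[0] > 1:
        m = A.shape[0]
        for _ in range(max_steps):
            Q, R = np.linalg.qr(A)
            A = R @ Q                          # similarity transform: eigenvalues unchanged
            if abs(A[m - 1, m - 2]) < tol * np.linalg.norm(A):
                break                          # subdiagonal entry negligible: deflate
        eigs.append(A[m - 1, m - 1])           # corner entry approximates an eigenvalue
        A = A[: m - 1, : m - 1]                # drop last row and column
    eigs.append(A[0, 0])
    return eigs

# Small symmetric test matrix with real, distinct eigenvalues
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(sorted(qr_eigs_with_deflation(M)))
print(sorted(np.linalg.eigvals(M)))
```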

  • @sanchitagarwal8764
    @sanchitagarwal8764 · 4 months ago

    Great content, Professor

  • @phymadori545
    @phymadori545 · 4 months ago

    You've become my professor of linear algebra. Thank you very much.

  • @phymadori545
    @phymadori545 · 4 months ago

    Super. Thanks.

  • @anthonykonstantinou5378
    @anthonykonstantinou5378 · 4 months ago

    You explain everything in an extremely clear manner, thanks a lot

    • @martijnanthonissen
      @martijnanthonissen · 4 months ago

      @@anthonykonstantinou5378 Great to hear that. Thanks for the feedback!

  • @advancedappliedandpuremath
    @advancedappliedandpuremath · 5 months ago

    Hi Sir, this video is exceptional; it summarizes all of SVD. I have just one question: can we find AA^T first, which gives the U matrix, and then find the V matrix using the relation UΣ = AV? I mean the reverse of the process usually found in books.

    • @martijnanthonissen
      @martijnanthonissen · 5 months ago

      Thanks! I'm not sure the reverse process will work, because we define the columns of U in terms of the columns of V.

    • @advancedappliedandpuremath
      @advancedappliedandpuremath · 5 months ago

      @@martijnanthonissen Sir, is it necessary to arrange the singular values in descending order? What if we don't?

    • @martijnanthonissen
      @martijnanthonissen · 5 months ago

      @@advancedappliedandpuremath You can find a factorization where the singular values are not sorted. However, the singular values are a weight for the importance of the singular vectors. For applications you usually approximate the matrix using only a few singular vectors, and then it is useful to sort them by importance.

    • @advancedappliedandpuremath
      @advancedappliedandpuremath · 5 months ago

      @@martijnanthonissen Great, Sir, thanks a lot.
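
A small sketch of why the ordering matters in practice (hypothetical data, not from the video): NumPy already returns the singular values in descending order, and keeping the k largest gives the best rank-k approximation, with 2-norm error equal to the first discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))             # hypothetical data matrix

# numpy returns the singular values sorted in descending order
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                       # keep only the k most important directions
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# 2-norm error of the best rank-k approximation equals the (k+1)-st singular value
print(np.linalg.norm(A - A_k, 2), s[k])
```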

  • @martijnanthonissen
    @martijnanthonissen · 5 months ago

    Thanks! What do you mean by "power of matrices property"?

    • @karimsayed4889
      @karimsayed4889 · 5 months ago

      At 3:57, the property entitled "Lemma: Power of matrices". You mentioned that it was explained previously in one of these videos.

    • @martijnanthonissen
      @martijnanthonissen · 5 months ago

      @@karimsayed4889 It is in video 1-3 of the NLA playlist. Here's a link: ua-cam.com/video/utLFuFLZOFk/v-deo.htmlsi=1k2EZr0Wrv_ASTdj

  • @karimsayed4889
    @karimsayed4889 · 5 months ago

    Great video. Where do you explain the power of matrices property?

  • @kaanyucel6294
    @kaanyucel6294 · 6 months ago

    How do you choose the shifts (the sigmas) when you don't know the eigenvalues?

    • @martijnanthonissen
      @martijnanthonissen · 6 months ago

      That is indeed a problem. You can estimate eigenvalues (using, e.g., Gershgorin's theorem).
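
A small sketch of the estimate mentioned in this reply (the matrix is illustrative): every eigenvalue lies in at least one Gershgorin disc, so the disc centres, i.e. the diagonal entries, are cheap candidate shifts.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centre, radius) of each Gershgorin disc of the square matrix A."""
    A = np.asarray(A, dtype=float)
    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)   # off-diagonal row sums
    return list(zip(centres, radii))

A = np.array([[10.0, 1.0, 0.5],
              [ 0.2, 8.0, 0.3],
              [ 1.0, 0.4, 2.0]])
for c, r in gershgorin_discs(A):
    print(f"disc: centre {c:5.1f}, radius {r:4.2f}")
print("eigenvalues:", np.linalg.eigvals(A))
```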

  • @aryanshrajsaxena6961
    @aryanshrajsaxena6961 · 6 months ago

    Hi. I am Chong Li from Bloodsport. Love your lectures sir!

  • @fazgamerx
    @fazgamerx · 6 months ago

    Wow!

  • @AM-jx3zf
    @AM-jx3zf · 6 months ago

    Thank you, dear sir. Very simple and easy to understand. I was a math undergrad, and I am going back for my master's after 10 years, so this is a huge help for revision.

  • @hectortilla_francesa
    @hectortilla_francesa · 6 months ago

    This has been a perfect explanation of the Householder QR decomposition! Really grateful for your video!!

  • @wilsonailen5015
    @wilsonailen5015 · 7 months ago

    Please, can you send me a link about finding the gradient (derivative) of norms? Thank you

    • @martijnanthonissen
      @martijnanthonissen · 7 months ago

      You probably need the derivative of the 2-norm? I think you can find that yourself! Just try differentiating its definition in components.
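
Worked out, the hint in this reply gives, for x ≠ 0:

```latex
\[
  \|x\|_2 = \Bigl(\sum_{k=1}^{n} x_k^2\Bigr)^{1/2}
  \;\Longrightarrow\;
  \frac{\partial \|x\|_2}{\partial x_i}
  = \frac{x_i}{\bigl(\sum_{k=1}^{n} x_k^2\bigr)^{1/2}}
  = \frac{x_i}{\|x\|_2},
  \qquad
  \nabla\|x\|_2 = \frac{x}{\|x\|_2}.
\]
```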

  • @NYkid3099
    @NYkid3099 · 8 months ago

    Thank you. Extremely helpful!

  • @RobinWu-t9n
    @RobinWu-t9n · 8 months ago

    This greatly helps my understanding of the concepts and algorithms!!

  • @Schooling2023
    @Schooling2023 · 8 months ago

    This is so great! Thanks!

  • @ananthakrishnank3208
    @ananthakrishnank3208 · 8 months ago

    Wonderful lecture. I've been missing mathematics like this for a long time. Thanks!

  • @AMRAbdellatif-sj3dg
    @AMRAbdellatif-sj3dg · 9 months ago

    I really like your videos, but can you explain simply how to get H2 and H1?

  • @abdulazizalhaidari7665
    @abdulazizalhaidari7665 · 10 months ago

    Each video in this series is great; thanks from the bottom of my heart.

    • @abdulazizalhaidari7665
      @abdulazizalhaidari7665 · 10 months ago

      Though I wish there were more of the underlying technical details, as in some of your other lectures (i.e., the intuition on why this works).

  • @abdulazizalhaidari7665
    @abdulazizalhaidari7665 · 10 months ago

    Hi Prof Martijn, I always review your videos on Numerical Linear Algebra; your explanations just make sense and are intuitive. Thank you.

  • @deepalikulal88
    @deepalikulal88 · 10 months ago

    Hello Professor, may I know which book you used for reference? Thank you 😊

    • @martijnanthonissen
      @martijnanthonissen · 10 months ago

      There are a couple of books I like on the topic. Each one of these is a great resource:
      - Michael T. Heath, Scientific Computing: An Introductory Survey. McGraw-Hill
      - Walter Gander, Martin J. Gander, Felix Kwok, Scientific Computing - An Introduction using Maple and MATLAB. Springer, 2014
      - Richard L. Burden, J. Douglas Faires and Annette M. Burden, Numerical Analysis, 10th edition. Cengage Learning, 2016

  • @ashar4121
    @ashar4121 · 10 months ago

    Watching from Zürich, this is so great! You should have more subscribers

  • @abdulazizalhaidari7665
    @abdulazizalhaidari7665 · 10 months ago

    You are a blessing, Dr Martijn. Deep appreciation for your videos; you are a great lecturer

  • @enside8822
    @enside8822 · 10 months ago

    I absolutely loved your explanation for choosing the sign at 23:25. I couldn't find anywhere else on the internet whether we are supposed to use +, -, or signum, and why. Thank you a lot, professor.

    • @martijnanthonissen
      @martijnanthonissen · 10 months ago

      You are most welcome. Thanks for your nice comment!
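
A minimal sketch of the sign choice discussed here (illustrative code, not the lecture's): the reflection vector adds sign(x₁)·‖x‖₂ to the first component of x, so the two terms have the same sign and no cancellation occurs.

```python
import numpy as np

def householder_vector(x):
    """Householder vector v such that (I - 2 v v^T / (v^T v)) x = -sign(x[0]) * ||x|| * e1."""
    x = np.asarray(x, dtype=float)
    v = x.copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])   # same-sign addition avoids cancellation
    return v

x = np.array([3.0, 4.0, 0.0])
v = householder_vector(x)
H = np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)
print(H @ x)    # approximately [-5, 0, 0]
```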

  • @abdulwasayikhlaq8013
    @abdulwasayikhlaq8013 · 11 months ago

    Excellent explanation!

  • @dothuong-f1e
    @dothuong-f1e · 11 months ago

    σ is the smallest singular value of A*A, the µi are the singular values of A, and we have used the fact that A*A is normal. Why is ‖(A*A)^{-1}‖ = (‖A^+‖)^2? Can you explain this for me? Thank you so much

    • @martijnanthonissen
      @martijnanthonissen · 11 months ago

      I am afraid that I do not understand what you are asking. Could you elaborate please?

  • @CGMossa
    @CGMossa · 11 months ago

    I've seen these videos, and frankly they are some of the best on the web in terms of clarity and information. Love it.

  • @martijnanthonissen
    @martijnanthonissen · 11 months ago

    Check Video 2-3 in this series. It discusses the sensitivity of linear systems.

  • @dothuong-f1e
    @dothuong-f1e · 11 months ago

    Can you help me understand the sensitivity of the variable A?

  • @holyshit922
    @holyshit922 · 11 months ago

    One important thing: the cross product is a vector. Americans misuse the terms cross product and its length. For example, in convex hull algorithms they use the term cross product even though the cross product only works for 3D vectors, and what they actually have in mind is the length of the cross product.

  • @rainbowreviews8644
    @rainbowreviews8644 · 11 months ago

    Thank you so much for these lectures ❤

  • @ashutoshtrivedi3960
    @ashutoshtrivedi3960 · 11 months ago

    Can you provide code for this?

  • @artemandrienko5165
    @artemandrienko5165 · 11 months ago

    Dear Professor, thank you very much for the explanation! How could I deal with complex matrices? Can I use QR/Schur for the complex case? As far as I understood, you derived the explanation for real values.

    • @martijnanthonissen
      @martijnanthonissen · 11 months ago

      Indeed, the video is for real matrices. The decomposition exists for complex matrices too. You can look on Wikipedia to see how that works. Good luck!

    • @artemandrienko5165
      @artemandrienko5165 · 11 months ago

      Thank you very much, Professor! @@martijnanthonissen

  • @holyshit922
    @holyshit922 · 1 year ago

    And suppose I would like to get some orthogonal polynomials via orthogonalisation: how would the Householder transformation help? I know how to use modified Gram-Schmidt for this purpose. We have an inner product other than v_{j}^{T}v_{k}. For example, the inner product for Chebyshov polynomials (sh is hard, and over that e there are two dots which Russians usually don't write, so it is better to write o here) is sum(a_{2k} * 1/2^(2k) * binomial(2k, k), k = 0..floor((m+n)/2)), where p(x)q(x) = sum(a_{k}*x^k, k = 0..m+n), so this inner product is different from the one produced by v_{j}^{T}v_{k}.

    • @martijnanthonissen
      @martijnanthonissen · 1 year ago

      I do not have a direct answer to your question, but the video ua-cam.com/video/OGRuR2uOWUQ/v-deo.htmlsi=O36_V4M2V25w5vzU covers Gram-Schmidt to factor a matrix. You may also use Householder to get such a factorization.
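
A minimal sketch related to this reply (illustrative, using the standard Euclidean inner product only): modified Gram-Schmidt applied to sampled monomials gives a QR factorization whose Q columns are values of polynomials orthonormal with respect to the discrete inner product on the grid; the non-standard inner product in the question would need its own treatment.

```python
import numpy as np

def mgs_qr(A):
    """QR factorization of a full-column-rank matrix A by modified Gram-Schmidt."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        for k in range(j + 1, n):               # remove the q_j component from later columns
            R[j, k] = Q[:, j] @ Q[:, k]
            Q[:, k] -= R[j, k] * Q[:, j]
    return Q, R

# Monomials 1, x, x^2, x^3 sampled on a grid
x = np.linspace(0.0, 1.0, 6)
A = np.vander(x, 4, increasing=True)
Q, R = mgs_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))
```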

  • @abdelrahmanibrahim1981
    @abdelrahmanibrahim1981 · 1 year ago

    Well done, I appreciate your effort

  • @davidstalmarck8676
    @davidstalmarck8676 · 1 year ago

    This series has been awesome. Thank you so much for publishing these lectures!

  • @mingusbingus6746
    @mingusbingus6746 · 1 year ago

    Good stuff; the Dutch was a little confusing, but luckily math is universal

  • @davidstalmarck8676
    @davidstalmarck8676 · 1 year ago

    Thank you!❤

  • @RJone89
    @RJone89 · 1 year ago

    Absolutely the best explanation -- thank you tremendously!

  • @mxyptlkk
    @mxyptlkk · 1 year ago

    Thank you professor

  • @krishnasuseel825
    @krishnasuseel825 · 1 year ago

    Thank you so much sir