Lecture 9 (CEM) -- Finite-Difference Method

  • Published 17 Jan 2025

COMMENTS • 79

  • @flivon 9 years ago +30

    Thank you for sharing knowledge on the internet.
    People like you are the ones who are changing the world for the better!
    Best regards from Brazil.

  • @JordanEdmundsEECS 5 years ago +4

    Some of the most beautiful math I have seen in a while. This just made the connection between computation and the analytic world beautifully.

  • @oscarnieves5083 10 years ago +11

    After months of skimming through books and other resources, I stumbled upon this little video. Nothing until this point had made sense to me about FDM or how to actually use it (most books tell you a bunch of equations and formulas but never actually show you a step by step implementation of the method to solve real problems). This video is extremely useful and clear, and also gives some really good tips and shortcuts to applying FDM without having to do tedious calculations and algebraic substitutions. Excellent work.

  • @tilkesh 4 months ago +1

    Thank you very much. For many of us, CEM = Prof. Raymond's lectures. This lecture series is historic and has already stood the test of time.

  • @amiralizadeh6621 2 years ago +2

    That's exactly what I wanted! A simple tutorial that leaves the more complicated subjects to the reader...

    • @empossible1577 2 years ago

      Thank you!!
      If you are looking for more detail and examples for the finite-difference method, you might like Topics 6 and 7 here:
      empossible.net/academics/emp4301_5301/

  • @ferrabras 6 years ago +2

    I keep referring your videos to my radar students who think every target is made of a set of independent point targets! Thank you for your videos!

    • @empossible1577 6 years ago

      That is awesome! I am very glad these are helping your students. I am curious which specific lectures you find most helpful for radar.

  • @SzTz100 1 year ago +1

    This is so, so good; better than anything else on YouTube.

    • @empossible1577 1 year ago +1

      Thank you! This is great to hear! This particular video is quite old. I have improved and expanded on this topic. You may enjoy seeing these new materials. You can find them under Topics 6 and 7 in my Computational Methods course here:
      empossible.net/academics/emp4301_5301/
      Hope this helps!

  • @Bambabah 11 years ago +1

    Best explanation of the topic, not too simplistic, not too complicated.

  • @johngarcia5695 6 years ago +1

    Awesome intro for FDM. Just what I was looking for! Thank you for sharing.

    • @empossible1577 6 years ago

      Thank you! I have more recently developed Lectures 6b, 6c and Topic 7 here:
      emlab.utep.edu/ee4386_5301_CompMethEE.htm

  • @nth2tell 8 years ago +6

    Darn... I didn't know the finite-difference method could be this simple. Very well presented.

    • @empossible1577 8 years ago +1

      With just a little bit of practice, it is very easy. If you get experienced with the matrix operators in this lecture, you can solve entirely new differential equations in just minutes.
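
      As a concrete illustration, here is a minimal MATLAB sketch of that workflow; the test problem d2y/dx2 = g(x) with zero values at both ends, the grid size, and g(x) are illustrative assumptions, not taken from the lecture:
      N  = 100;  dx = 1/(N+1);  x = (1:N)'*dx;     % interior grid points
      e  = ones(N,1);
      D2 = spdiags([e -2*e e], -1:1, N, N)/dx^2;   % second-derivative matrix; zero boundary values are built in
      g  = sin(pi*x);                              % example excitation
      y  = D2\g;                                   % solve [D2][y] = [g] in one line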

  • @lionelwilfriedwandji2042 10 years ago +2

    OMG, it was all so clearly explained!! I understand the stuff now, a myriad of thanks!

  • @mhany62 10 years ago +1

    Great work

  • @Wofakay31 9 years ago +1

    Really cool videos

  • @shibby4555 10 years ago +1

    Thanks so much for this. Superb clarity and concise at every step.

  • @husnainhyder6713 4 years ago +2

    Dear Sir,
    I've got to say, you rock.
    The way you deliver your lecture, it sounds like James Woods is giving it.
    You rule.
    By the way, thanks a lot.

  • @calebyenusah3174 6 years ago +1

    Pure Gold! Thank you!!

    • @empossible1577 6 years ago

      Thank you!! You may be interested to know that I broke this one lecture into a series of lectures in my Computational Methods for EE course. See Topics 6b, 6c, and 7a - 7d here:
      emlab.utep.edu/ee4386_5301_CompMethEE.htm
      It may answer some questions and make things even easier to understand. Hope this helps!

  • @vizagbapuji3073 9 years ago +1

    Very good explanation. Could you also provide us with the slides you used? Thanks in advance.

  • @skardykats 10 years ago +1

    Thank you very much for the lecture. It's very helpful and helps me 'decrypt' many articles.

  • @MinhLe-by2te 9 years ago +2

    Thank you very much for this amazing video. This one helped me understand more than the piles of documents I read over weeks did. Excellent work, buddy!

  • @bralecou 9 years ago +3

    You are a star!

    • @f4dy 6 years ago +1

      Agreed!

  • @MyAows 8 years ago +1

    Nice way of explaining it, thanks indeed.

  • @ozzyfromspace 5 years ago +1

    There’s a similar function to reshape in NumPy (Python), but I didn't understand its relevance. 37:50 made it make sense, thank you 🙏🏽

    • @empossible1577 5 years ago

      Ha ha! Very good! I am glad it helped you!
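
      For readers wondering about the reshape step discussed above, here is a minimal MATLAB sketch; the grid size is an arbitrary assumption:
      Nx = 4;  Ny = 3;
      F  = rand(Nx,Ny);            % field on the 2-D grid
      f  = reshape(F, Nx*Ny, 1);   % unfold to a column vector so a matrix operator can multiply it
      F2 = reshape(f, Nx, Ny);     % fold the result back onto the grid (F2 equals F)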

  • @ahmedal-nader9078 9 years ago +1

    Good work... but what is a "meshless method"?

    • @empossible1577 9 years ago

      +Ahmed AL-Nader You may find some answers here:
      en.wikipedia.org/wiki/Meshfree_methods

  • @WthWidU 5 years ago +1

    Thank you doctor, very clear explanation

  • @GalinaMalakhova 9 years ago +1

    Excellent, thanks :)

  • @vishalcsharma 9 years ago

    Thanks for the excellent video! I am wondering if you could provide any academic research papers (citations/references) that published the "generalized finite difference method" you explained in this video for computing an nth-order derivative with an mth degree of accuracy?

    • @empossible1577 9 years ago

      +Vishal Sharma I do not know if that exists. I have written the function to do it, but never published the work. In fact, I simulated some waveguides using something like 8th-order accurate derivatives and saw little accuracy improvement over standard second-order derivatives. I think 4th-order accurate derivatives would be justified for simulating very large structures or for performing very long time-domain simulations where phase errors would have more opportunity to manifest themselves.

  • @martinapetkovska2376 9 years ago +1

    Hi there, where can I find the transcript of this video? It's really amazing :)

    • @empossible1577 9 years ago

      +Martina Petkovska There is no transcript, but you can get the latest version of the notes along with other resources from the course website:
      emlab.utep.edu/ee5390cem.htm
      Hope this helps!

    • @martinapetkovska2376 9 years ago

      Thank you for your quick response. This video really helped me understand the finite difference method.

    • @empossible1577 9 years ago

      I am very glad it is helping you! There are plenty more lectures for you to see!

  • @ozzyfromspace 5 years ago +1

    Using weighting functions is clever; this video has the spirit of supervised machine learning.

  • @LongyZ13 11 years ago +1

    Thanks so much for this, really good explanation.

  • @Struct2009 3 years ago +1

    The ODE is D4y = g(x) and the boundary conditions are y(0) = 0, y(L) = 0, D2y(0) = 0, D2y(L) = 0 --> How would I set the boundary conditions on the second derivatives? Do I subtract the D2y atom from the top-left and bottom-right corners of the square matrix? Does anyone have ideas on how to set derivative boundary conditions? Thanks for posting all this great information!

    • @empossible1577 3 years ago

      Hmmm... I suspect this differential equation is derived from other differential equations that have lower-order derivatives. If this is the case, I recommend converting those preliminary equations to matrix form with boundary conditions and then manipulating those matrix equations to derive the final equation with the fourth-order derivative. I do a similar thing in electromagnetics. I never just solve the wave equation. Instead, I convert the preliminary equations to matrix form and then derive the wave equation from the matrix equations. Give this some thought.
      BTW, if you are learning the finite-difference method, I recommend starting with Topics 6 and 7 in my Computational Methods course. Skip the numerical integration lectures. Here is the link:
      empossible.net/academics/emp4301_5301/
      I think your level of understanding of the finite-difference method and how to apply boundary conditions will improve 100x by working through these videos.
      Hope this helps!
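
      A minimal MATLAB sketch of the splitting suggested above, assuming D4y = g is broken into D2m = g and D2y = m with y and m both zero at the ends (the grid and load are illustrative):
      N  = 200;  L = 1;  dx = L/(N+1);
      e  = ones(N,1);
      D2 = spdiags([e -2*e e], -1:1, N, N)/dx^2;   % zero-value (Dirichlet) ends are built in
      g  = ones(N,1);                              % example load
      m  = D2\g;                                   % D2*m = g  with m = 0 at the ends
      y  = D2\m;                                   % D2*y = m  with y = 0 at the ends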

    • @Struct2009 3 years ago

      @@empossible1577 Yes, the two equations are coupled. ODE.1 is D4y = g(x) and ODE.2 is D2y = h(x). The trouble is g(x) and h(x). Setting boundary conditions on ODE.2 won't influence ODE.1 and vice versa. You will just get two different solutions for y(x). I will write the code up and try working on it some more. Thanks for your recommendations.

  • @anuradhasharma9514 6 years ago +1

    Thank you.. 😇

  • @mauricesunkpal1227 5 years ago

    I know you have been absent from the comments section for some time now, but I just wanted to say thank you for educating a lot of us. Do you have similar videos for other numerical methods, like FEM?

    • @empossible1577 5 years ago

      Thank you!
      I don't have anything like this for FEM. I do have a course on the finite-difference time-domain method.
      emlab.utep.edu/ee5390fdtd.htm
      I also have a general computational methods course here:
      emlab.utep.edu/ee4386_5301_CompMethEE.htm
      That is it for now!

  • @pegatrisedmice 9 years ago +1

    Great! Thanks!

  • @johanneskarlsson6535 8 years ago +2

    Thanks a lot

  • @LudwigvanBeethoven2 9 years ago +2

    Haha, black hole :) By the way, excellent explanation. Thank you so much!

  • @alexanderkuznetsov7597 7 years ago

    "We can approximate any order derivative as just linear sum of function values"
    I don't get this part, how exactly can we get something close to derivative of a function if we just sum all the function values?

    • @empossible1577 7 years ago +1

      It is a linear sum, meaning you have to correctly weight each function value. A simple example is this:
      A first-order derivative is the slope of a function. If we know the function value at two different points
      f1 = f(x1)
      f2 = f(x2)
      Then the slope 'm' (rise over run) is
      m = (f2 - f1)/(x2 - x1)
      For uniform spacing dx = (x2 - x1). Since slope is the first-order derivative, we have
      df/dx = (f2 - f1)/dx
      This equation is a finite-difference approximation of a first-order derivative. Now expand the right-hand side of this equation as follows:
      df/dx = (1/dx)*f2 - (1/dx)*f1
      See how this last expression is a linear sum of the function values f1 and f2? They just have to be correctly weighted so that the linear sum estimates a derivative.
      In fact, any linear operation (convolution, Fourier transform, integrals, derivatives, etc.) can all be estimated using a linear sum of function values. The only trick is figuring out the correct weights.
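
      A quick numerical check of that weighted sum in MATLAB; the function f(x) = x^2 and the sample points are hypothetical:
      dx = 0.1;  x1 = 1.0;  x2 = x1 + dx;
      f1 = x1^2;  f2 = x2^2;
      dfdx = (1/dx)*f2 - (1/dx)*f1   % = 2.1, close to the exact derivative 2*x1 = 2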

    • @alexanderkuznetsov7597 7 years ago

      So "df/dx = (1/dx)*f2 - (1/dx)*f1" is a linear sum. Well I didn't know the term.

    • @empossible1577 7 years ago +1

      You got it! :-)

  • @suyashsharma5027 9 years ago

    I have a doubt here, and I am surprised that nobody has asked this question straight away below.
    For 2nd-order accuracy you chose N = 2, but for 4th-order accuracy you chose N = 3. Why is that?
    At the very start you explained that we choose N = A, where A is the desired order of accuracy.

    • @empossible1577 9 years ago +1

      +Suyash Sharma Sorry for this mistake!! It appears that what I have called "4th order accurate" is indeed 3rd order accurate since N=3. Not sure how I missed this, but thank you!!
      I have actually removed the section of these notes for deriving finite-difference expressions. That material now appears in a different class as I reorganize the information among my different courses.

    • @suyashsharma5027 9 years ago

      +CEM Lectures No problem. I actually sat down and derived the results for 4th-order accuracy, and this can't be anything but helpful :)
      Other than that, I find it very interesting.
      I had one more question regarding going with central-difference methods in both cases. I heard you say that we will need to find the FD at the center for best accuracy. But I felt like there could be an example of doing it with forward and backward approximations as well. I study CFD, and it turns out that since in this kind of approach the FD depends on both preceding and succeeding points (known as upstream and downstream fluxes), any disturbance in the succeeding point affects the FD point as well, which is otherwise contrary to the general physics.
      And here, only out of curiosity, I felt like trying to figure out a way to fit forward-difference and backward-difference approximations into the two examples, but couldn't agree on the origin and formulation of Ax accordingly.

    • @empossible1577 9 years ago +1

      +Suyash Sharma There are times when it is done, but it should only be done when you have no other choice. In beam propagation methods, for example, you can use forward Euler, which is a simple forward finite-difference, to calculate the field in layer 2 from the field in layer 1. The correction to this is called Crank-Nicolson, which uses central finite-differences at the expense of a bit more intensive computation. There are also some time-marching methods that use backward differences, but this can also be corrected at the expense of more intensive computation. You mentioned upwind and downwind. There are upwind and downwind finite-differences when simulating moving surfaces. These definitely use forward or backward differences. Even more, they have to determine the direction from which the surface is approaching fastest and calculate differences from that direction only. I put together a quick lecture on this, but only the electronic notes are available. See "Surface Propagation Methods" on the CEM course website, but don't expect too much. These slides are far from polished. They are a work in progress. :-)

    • @suyashsharma5027 9 years ago

      CEM Lectures Thank you so much for taking the time to answer my comments. I will go through your slides and follow them as well. :)

    • @empossible1577 9 years ago

      You are welcome!

  • @michaelscottspencer3997 6 years ago

    I think it may be easier, in the case of D_x^2 on the 2-D grid, to build it using a tensor product instead of placing the zeros in afterwards!

    • @empossible1577 6 years ago

      Perhaps. Another way is to insert the zeros into the 1D arrays before inserting them into the matrices. Good thinking!
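
      A minimal MATLAB sketch of the tensor-product construction mentioned above; the grid sizes, spacing, and the convention that x varies fastest in the unfolded column vector are assumptions:
      Nx  = 5;  Ny = 4;  dx = 0.1;
      e   = ones(Nx,1);
      d2x = spdiags([e -2*e e], -1:1, Nx, Nx)/dx^2;   % 1-D second derivative along x
      DX2 = kron(speye(Ny), d2x);                     % 2-D operator acting on reshape(F, Nx*Ny, 1)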

  • @sebastianochoa7780 8 years ago

    Hey thank you for the great video! I have a question.
    When implementing the improved method in MATLAB and we define the matrices Dx, etc., I later want to define [L] = [Dx - a], where Dx is a 4x4 matrix but a is a constant in an ODE... So do I write "a" as a 4x4 matrix with each element containing the constant?
    Much appreciated!

    • @empossible1577 8 years ago

      If I understand what you are asking, you would calculate [L] as [L] = [Dx] - [A], where [A] is a square matrix with the diagonal elements containing values of 'a' at each point on the grid. This is talked about at around 48:30 in this video.
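
      A minimal MATLAB sketch of that construction; the 4-point grid, spacing, and constant value of a follow the question and are purely illustrative:
      Nx = 4;  dx = 0.1;  a = 2;
      e  = ones(Nx,1);
      Dx = spdiags([-e e], [-1 1], Nx, Nx)/(2*dx);   % central-difference first-derivative matrix
      A  = a*speye(Nx);                              % 'a' on the diagonal (use spdiags(avec,0,Nx,Nx) if a varies across the grid)
      L  = Dx - A;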

    • @sebastianochoa7780 8 years ago

      Thank you! Sorry for the superfluous question :P

    • @empossible1577 8 years ago

      No worries. Glad I was able to answer your question!

    • @sebastianochoa7780 8 years ago

      Haha thanks, you're great... Would it be OK if I ask you another one? :P Is there a way I could message you? It pertains to solving a relatively simple ODE using this method and is a similar question to the previous one. Thank you!

    • @empossible1577 8 years ago

      Sure. You can get my contact information from our research website...
      emlab.utep.edu
      Click on the Team tab. I am Raymond Rumpf.

  • @limkherming1158 10 years ago

    Hi, what happens when g = 0 in the final matrix equation @ 31:08?

    • @empossible1577 10 years ago +2

      That means there is no excitation.
      There are two cases I've come across where this happens. By far the most common is when you are solving a differential equation as an eigen-value problem. In this context, you are calculating the modes of a system, or a series of allowed states. Another area where I have come across this is singular value decomposition problems, that is, solving Ax = 0. This happens when you use a method like the transfer matrix method to calculate modes in a waveguide. It is closely related to eigen-value problems.
      Otherwise, it rarely makes sense to talk about a solution to a problem when there is no excitation.
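
      A minimal MATLAB sketch of the eigen-value case, assuming a 1-D second-derivative operator with zero boundary values as the example system:
      N  = 100;  dx = 1/(N+1);
      e  = ones(N,1);
      D2 = spdiags([e -2*e e], -1:1, N, N)/dx^2;
      [V,D] = eigs(-D2, 6, 'smallestabs');   % six lowest modes in the columns of V; eigenvalues approach (m*pi)^2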

  • @colin_hart 4 years ago +1

    You can find coefficients for kth-order accurate nth derivatives on a centered stencil in MATLAB with:
    n = 2;  k = 4;                       % derivative order n, using k+1 stencil points
    A = ((-k/2:k/2)'.^(0:k))^-1;         % inverse of the Vandermonde matrix built from the stencil points
    factorial(n)*A(1+n,:)               % stencil weights (the n! factor is needed to match the standard tables)
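
    With n = 2 and k = 4, this returns the familiar 4th-order central second-derivative stencil [-1/12 4/3 -5/2 4/3 -1/12]; divide by dx^n when applying it on a grid with spacing dx.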