This is an amazingly lively yet carefully paced lecture.
1 whole unit in 17 minutes. Thank you.👍
Is there a video using linear differential operator for non-homogeneous second order equations?
The same concept does apply, yes. I have not checked whether such a video actually exists, but producing an exercise where the equation is nonhomogeneous and the method still works is relatively easy. This method works for any linear second-order equation, homogeneous or not.
Although this method can indeed be considered unpleasant and more complicated than just guessing the solution, it is very nice, because it proves that the guess is valid, and it does so using ideas from linear algebra that apply in settings far more general than differential equations. As a consequence, these ideas also generalize to linear differential equations of any order. The method also makes solving easier when the equation is not homogeneous.
Ah. I haven't thought about using it for non-homogeneous. Good idea!
I'm curious, could you elaborate on how we could potentially use the method for non-homogeneous second order differential equations ?
@@Mastershayft Let y be a twice-differentiable function, and let p, q, r be functions such that (D^2)(y) + p • D(y) + q • y = r, where D is the derivative operator. The idea is to let L = D^2 + p • D + q • I be a linear operator. We want functions f, g such that L = (D - f) • (D - g) = D^2 - g • D - D(g) - f • D + f • g = D^2 - (f + g) • D + (f • g - D(g)), which means p = -(f + g) and q = f • g - D(g). Note f = -(p + g), so q = -((p + g) • g + D(g)) = -p • g - g • g - D(g), hence D(g) = -(g^2 + p • g + q). This is a first-order (Riccati) equation to solve for g with standard techniques. Having found g, set (D - g)(y) = z, so that (D - f)(z) = r. These are both first-order linear differential equations, solvable with standard techniques. Therefore, every linear second-order differential equation is in fact equivalent to a system of three first-order equations, two of which are linear. This all relies on the factorization method. It can be extended to higher-order linear equations, though the complexity naturally increases with the order.
When the coefficients are constant, the factorization of L becomes trivial, since f and g are also constants (the roots of the characteristic polynomial). For special classes of functions p, q, there are well-known factorizations giving you f and g. It is just a matter of studying them.
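The constant-coefficient case above can be sketched numerically. The example equation y'' - 5y' + 6y = 0 and the constants C1, C2 are assumptions chosen for illustration, not taken from the video; the check uses finite differences rather than symbolic algebra.

```python
import math

# Assumed example: y'' - 5 y' + 6 y = 0, i.e. p = -5, q = 6.
p, q = -5.0, 6.0

# Factor L = (D - f)(D - g): f and g are the roots of m^2 + p m + q = 0.
disc = math.sqrt(p * p - 4 * q)
f, g = (-p + disc) / 2, (-p - disc) / 2   # f = 3, g = 2

# Step 1: (D - f) z = 0  gives  z(x) = C1 * exp(f x).
# Step 2: (D - g) y = z  is first-order linear; the integrating factor
# exp(-g x) gives y(x) = C1/(f - g) * exp(f x) + C2 * exp(g x)
# when the roots are distinct.
C1, C2 = 1.0, 2.0
def y(x):
    return C1 / (f - g) * math.exp(f * x) + C2 * math.exp(g * x)

# Numerical check that y'' + p y' + q y is (approximately) zero.
h, x0 = 1e-5, 0.7
y1 = (y(x0 + h) - y(x0 - h)) / (2 * h)
y2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2
residual = y2 + p * y1 + q * y(x0)
print(abs(residual) < 1e-3)   # True
```

The point of the sketch is that the second-order solve never happens directly: only the quadratic root-finding and two first-order solutions appear.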
@@angelmendez-rivera351 How do you get the -D(g) when developing (D - f)(D - g) ?
my lecturer is in love with the D operator for nonhomogeneous second order
Wow, this is gold. Thank you so much professor ❤.
Amazing! Proof made easy!! Thank you Prof for sharing this!!
Thank you very much professor An. That was excellent explanation.
Great video .. thanks Daniel
This is a video of curious people (all over 18) discovering the amazing properties of the D.
bruh
Thanks for your video. But, if you are introducing D, then you can solve it as an eigenvalue problem at 11:16 instead of using an integrating factor. It's easier, at least for me.
I got my doubt cleared..thnx very much
Why is the derivative at 11:11 dz/dx and not dz/dy? And how do you then get to the e^-4x...
If you watch it from the beginning, my D is always d/dx and never d/dy. Also, e^-4x follows from separable differential equations, which I do not explain here. So please study separable differential equations.
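The separable-equation step referred to here can be sketched as follows. The specific equation dz/dx = 4z and the constant C are assumptions for illustration (chosen so that e^-4x appears as the matching integrating factor), not the video's exact working.

```python
import math

# Assumed separable example: dz/dx = 4 z.
# Separating: dz/z = 4 dx, so ln|z| = 4x + const, i.e. z(x) = C e^(4x).
# Multiplying the equation through by e^(-4x) is the integrating-factor
# view of the same computation.
C = 3.0
def z(x):
    return C * math.exp(4 * x)

# Numerical check that dz/dx = 4 z at a sample point.
h, x0 = 1e-6, 0.25
dz = (z(x0 + h) - z(x0 - h)) / (2 * h)
residual = dz - 4 * z(x0)
print(abs(residual) < 1e-3)  # True
```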
Thnx a lot! Perfect👍🏼
Now suppose you have a RHS which is a given function. Can you use this method?
It will be solved in two steps, where each step involves solving a first-order linear differential equation (which I assume you know how to do). Each step involves integrating the function on the right side multiplied by some exponential function; in some cases that integral has no closed form, while in others it can be evaluated.
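The two-step procedure can be sketched on an assumed example, y'' - 3y' + 2y = e^x, factored as (D - 1)(D - 2)y = e^x; the equation and the sample point are illustration choices, not from the video.

```python
import math

# Step 1: (D - 1) z = e^x. Integrating factor e^(-x):
#   (z e^(-x))' = 1, so one solution is z(x) = x e^x.
# Step 2: (D - 2) y = x e^x. Integrating factor e^(-2x):
#   (y e^(-2x))' = x e^(-x); integrating gives y(x) = -(x + 1) e^x.
def y(x):
    return -(x + 1) * math.exp(x)

# Numerical check that y'' - 3 y' + 2 y ≈ e^x at a sample point.
h, x0 = 1e-5, 0.3
y1 = (y(x0 + h) - y(x0 - h)) / (2 * h)
y2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2
residual = (y2 - 3 * y1 + 2 * y(x0)) - math.exp(x0)
print(abs(residual) < 1e-3)  # True
```

Both steps are exactly the "integrate the right side times an exponential" pattern described above; here both integrals happen to be elementary.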
@@daniel_an I was thinking that the inverse of the operator D-a would be obtained via a Green's function, or something like that, and then this would be defined as the actual inverse.
@@mathunt1130 I see. I think the answer is yes, but I'm not sure. You get a double integral because you have to undo two first order differential operators, but maybe you can do a change of variables, so that the answer looks like a convolution???
@Daniel An I think you're making this too difficult. You can solve the simple ODE Dy-ay=f(x) from your favourite method and this will be the definition of (D-a)^-1f(x).
@Mat Hunt Oh. But I was talking about second order, which would require doing the process twice.
great job sir
Beautiful Lecture
For such an equation the auxiliary quadratic equation is much easier to use.
Cska Jama That's the point of the video: to show that solving it directly is hard and therefore another approach is better.
How would you prove the auxiliary equation works, though? You only know it works because you were explicitly told it works, but in mathematics, you have to formally prove that it works, not just take it for granted. This video provides the method that proves the auxiliary equation does in fact give the correct answers in all circumstances when the equation is homogeneous. Of course, this method also works even when the equation is not homogeneous, which is where the auxiliary equation fails.
Very great thank you Sir
Thank you sir
How to solve y"(x^2+1)=2xy'
(x^2 + 1)·y''(x) = 2·x·y'(x) is equivalent to y''(x) = 2·x/(x^2 + 1)·y'(x). Here, there are two cases to consider: y'(x) = 0, and otherwise. If y'(x) = 0 identically, then simply y(x) = C for every x, where C is an arbitrary real number. Otherwise, y''(x) = 2·x/(x^2 + 1)·y'(x) is equivalent to y''(x)/y'(x) = 2·x/(x^2 + 1). Notice that the derivative of ln[y'(x)] + A is y''(x)/y'(x), where A is an arbitrary real number, and likewise the derivative of ln[-y'(x)] + B is y''(x)/y'(x), where B is an arbitrary real number. Additionally, notice that 2·x/(x^2 + 1) is the derivative of ln(x^2 + 1) + D, where D is an arbitrary real number. Therefore, there are two cases to consider: y'(x) < 0, or 0 < y'(x). In the former, ln[-y'(x)] + B = ln(x^2 + 1) + D, while in the latter, ln[y'(x)] + A = ln(x^2 + 1) + D. These are equivalent to ln[-y'(x)] = ln(x^2 + 1) + D - B = ln(x^2 + 1) + E and ln[y'(x)] = ln(x^2 + 1) + D - A = ln(x^2 + 1) + F respectively, which give y'(x) = -exp(E)·(x^2 + 1) and y'(x) = exp(F)·(x^2 + 1) respectively. These two cases, together with the case y'(x) = 0, can all be summarized into a single case by letting y'(x) = G·(x^2 + 1), where G is an arbitrary real number: negative, zero, and positive are all allowed. Therefore, y(x) = G·(x^3/3 + x) + C.
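The derived general solution y(x) = G·(x^3/3 + x) + C can be checked numerically; the values of G, C, and the sample point below are arbitrary illustration choices.

```python
import math

# General solution derived above for (x^2 + 1) y'' = 2 x y'.
G, C = 2.5, -1.0
def y(x):
    return G * (x**3 / 3 + x) + C

# Finite-difference check of (x^2 + 1) y'' - 2 x y' ≈ 0 at a sample point.
h, x0 = 1e-5, 1.2
y1 = (y(x0 + h) - y(x0 - h)) / (2 * h)
y2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2
residual = (x0**2 + 1) * y2 - 2 * x0 * y1
print(abs(residual) < 1e-3)  # True
```

Analytically the check is immediate: y' = G·(x^2 + 1) and y'' = 2·G·x, so (x^2 + 1)·2Gx - 2x·G·(x^2 + 1) = 0.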
@@angelmendez-rivera351 good
amazing
gud
How can the integral of 0 be C?!
It's a constant; C represents any constant that could have been differentiated to get 0.
The integral of 0 is 0, but you still need to add +C.
@@iLoveTortex No, the antiderivative of a function is given by a set of functions, each of which differentiates to the original function. Every constant function differentiates to the zero function, so the antiderivative of 0 is the set of constant functions.
Pog
Emotional damage