A deceivingly difficult differential equation.
- Published 26 Mar 2022
- 🌟Support the channel🌟
Patreon: / michaelpennmath
Merch: teespring.com/stores/michael-...
My amazon shop: www.amazon.com/shop/michaelpenn
🟢 Discord: / discord
🌟my other channels🌟
Course videos: / @mathmajor
non-math podcast: / @thepennpavpodcast7878
🌟My Links🌟
Personal Website: www.michael-penn.net
Instagram: / melp2718
Randolph College Math: www.randolphcollege.edu/mathem...
Research Gate profile: www.researchgate.net/profile/...
Google Scholar profile: scholar.google.com/citations?...
🌟Suggest a problem🌟
forms.gle/ea7Pw7HcKePGB4my5
I hear a lot of talk about how easy it is to "abuse Leibniz notation" so I'd love to see a video that explains
1. Why it's an "abuse"
2. Why it works nonetheless
3. When it doesn't work
Check the chain rule for second derivatives; things are really only nice for first-order derivatives.
The theory of differential forms shows why treating differentials algebraically works in many instances anyway.
I've written about why Leibniz notation does/doesn't work quite a lot, so I'll just condense it down into the key points. Math stack exchange Q 21199 is a great page to see the key points and references.
We immediately see that considering Leibniz notation as a fraction fails, on a naive level, because dy and dx are not true numbers. They do represent real numbers brought, in the limit, to zero, but if they were simply 0, then dy/dx must be 0/0, which is, of course, undefined. So, at an intro real analysis level, we'd tend to abandon that fraction view and opt for a standard analysis view that dy/dx is y'(x), and not a fraction but a function defined by a limit in terms of y(x).
It also can be said to fail in practice when we get equations that wouldn't be true if "dy" and "dx" (or similar terms) were objects in their own right. If f(x,y) = 0 implicitly defines y as a function of x, then dy/dx = -(∂f/∂x)/(∂f/∂y). Multiplying that out as if all the terms were algebraic variables, we see dy/dx = -dy/dx, with an uninvited minus sign.
So, evidently, there's some problems that you might want to avoid. But at the same time, it works incredibly well as a heuristic, e.g., for the inverse function theorem (dy/dx = 1/[dx/dy] ), differential equations (dy/dx = ... --> \int dy = \int ... dx ), and so on. So, while these can be reframed in terms of standard analysis and avoiding the fraction heuristic entirely, there must be some interesting math going on that makes the heuristic possible and more than just an abuse of notation. (Likewise, for Euler's imaginative math, it may have been "illogical" and unjustified by the framework at the time but later frameworks could accommodate it. For instance, expanding sin(x) as a product over its roots was later justified by Weierstrass.)
And indeed, we do see that there's some good justifications. One is, as Joji mentions above, differential forms. Another is nonstandard analysis and considering dy and dx as infinitesimal hyperreals that are "equivalent", in one sense, to the real number 0, but do not introduce the 0/0 problem.
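The implicit-differentiation identity a couple of paragraphs up is easy to sanity-check symbolically. A quick sketch with sympy, using x² + y² − 1 = 0 as an assumed example curve:

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
u, v = sp.symbols('u v')

# Example implicit curve f(x, y) = 0 (a unit circle, chosen arbitrarily).
f = u**2 + v**2 - 1

# Differentiate f(x, y(x)) = 0 with respect to x and solve for y'.
dydx = sp.solve(f.subs({u: x, v: y(x)}).diff(x), y(x).diff(x))[0]

# Compare with the implicit-function formula -(df/dx)/(df/dy).
formula = (-f.diff(u) / f.diff(v)).subs({u: x, v: y(x)})
assert sp.simplify(dydx - formula) == 0
```

Both routes give dy/dx = -x/y, so the formula agrees with honest implicit differentiation here; the minus-sign "paradox" only appears when the d's are shuffled as if they were independent algebraic symbols.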
When I studied physics in Germany many years ago my maths professor had it "verboten".
Loosey-goosey definitions of dx, etc, are also useful for deriving physical equations like Navier-Stokes from a shell balance. For rigor, you would need a finite element and then take the limit as that element becomes infinitesimal. For chump derivation after you've been doing it for years, you can just call the element's thickness dz, for example. In this case, you need to be careful to know your spherical/cylindrical geometry when applicable.
It pretty much always works (assuming you are doing something reasonable, not like (dy/dx)^2 = (dy)^2/(dx)^2 or whatever) for single variable functions.
as mentioned by others, just multiply the initial equation by y' and take the first integral and you get (y')^2=2A-2exp(y) which is straightforward to solve. for that just follow the last 4 minutes of the video
Agree. Any time I see a second order non-linear diffeq, my first thought is "can I multiply by y' and integrate immediately?"
Exactly. Surprisingly, you also get this same equation by "abuse" of notation and writing y" as y'*(dy'/dy).
that's what I tried too; all in all, it eliminated like one substitution from the whole process, so just a slight optimization 😅
wait how did you get 2A- 2exp(y) in the RHS
@@davidblauyoutube I don't see why anyone would think of that; it comes out of nowhere. And why would anyone use the t substitution he does in the video? Also, if you multiply by y', how can you separate the variables? And how can you integrate y' times y'' when you don't know the value of either?
This equation is in the form of Newton's second law with force -e^y. It comes from a potential energy e^y and we thus get by conservation of energy E=1/2y'^2+e^y, with E a constant.
Nice to see another person who thinks alike.
Still need a way to explicitly get y as a function of t (or x), though.
@@yoseftreitman7226 the energy conservation equation is a separable first order ODE, so that can be solved by standard methods
@@joshuatilley1887 Well, no, not all first-order separable ODE's can be solved explicitly using standard methods. (Consider y' = sin(t)/t, for example.) I don't doubt that you've seen something I'm missing (I haven't watched the full video) but just reducing the problem to a first-order equation does not make it easy to solve explicitly in closed form by hand.
@@yoseftreitman7226 The point is this is a perfectly standard type of equation and all you have to do is use the standard method.
When I applied to UCSB, I submitted a paper on a generalization of this, where I showed that we can almost always solve y^(n)=ay^b or y^(n)=ae^(by) for all n. I think it’s a big part of why I got into my program!
Also over the past couple of weeks I came up with conditions for families F of functions such that y^(n)=f(y) with f in F can be solved by y’=g(y) for some g in F
Incredible!! Could I know more on this?
@@noedeverchere2833 You can probably work through it yourself. Let y^(n)=ay^b. Then make a “guess” that y’=cy^d. Then if you differentiate with respect to x, you get y’’=cdy^(d-1)y’=c^2dy^(2d-1). Then repeat this step until you get to y^(n) and set the two sides equal. You will almost always be able to solve (c,d) in terms of (a,b). The strategy is similar for y^(n)=ae^(by). Let me know if you have any questions.
Multiply the initial equation by 2y', and then integrate; we get the 1st order ODE y'^2 + 2e^y = C, which is a separable equation: move 2e^y to the RHS and take a square root (don't forget about ±). Separate, then integrate, and we get a solution for this equation.
2y' looks like an integrating factor
Yes, that was equivalent to my approach, which was to set u =y', so y" = u' = u (du/dy) = 1/2 d(u^2)/dy, which takes you to the same place. Yours is more direct. :)
@@adandap yes, this is also the technique for integrating equations without the argument (autonomous DEs): make "y" the new variable, and y' a new function of the variable "y", like u(y) = y'.
I did the same thing. Just grunt work with partial fractions from there. Similar technique for Hooke's Law.
As a physicist, that's the first thing that I thought about
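For anyone wanting to double-check the first integral in this thread, here's a small sympy sketch: differentiating (y')^2 + 2e^y gives 2y'(y'' + e^y), so it is constant precisely when the original ODE holds.

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')

# First integral claimed above: (y')^2 + 2*e^y = C
I = y(x).diff(x)**2 + 2*sp.exp(y(x))

# Its x-derivative factors as 2*y'*(y'' + e^y), which vanishes
# exactly when y'' + e^y = 0.
dI = I.diff(x)
target = 2*y(x).diff(x)*(y(x).diff(x, 2) + sp.exp(y(x)))
assert sp.simplify(dI - target) == 0
```

This is the same "multiply by 2y' and integrate" step, run backwards as a verification.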
That was absolutely amazing! A complete review of differential equations in one video. :-)
Your videos on differential equations definitely are my favorite!
Wow, that seems like an immensely powerful technique towards the middle, where the multiple substitutions let you change the equation from y in terms of x into a much simpler equation that can be integrated more easily. I bet there are more uses for this strategy.
For a physics student, this is just a Newton equation. Using the kinetic energy theorem, one can quickly get (y')^2=C-2e^y. Then separate the variables and do the integral.
I wasn't fond of differential equations when I was a student. But there are definitely some cool problems in this area. Great video, Michael!
Wow, we're not in kindergarten math anymore. There were few techniques to solve DEs that were NOT used. I love DE. But I wonder who found this solution first and how long it took them to find it. Thank you. Well presented.
But I haven't finished my juice! 😭 Also, I need a nap. I'm going back to kindergarten.
@@whatelseison8970 A student who I was tutoring on zoom fell asleep. They apologized. I said "No apology needed, I can't learn when I'm sleepy."
No eigenvalues, 0/10!
@@chrislankford7939 "Turning it into an eigenvalue problem is left to the student." Shall we create some for the other comment readers?
This is totally standard, but the video explains nothing. It's a particle moving in a potential U = e^y.
I multiplied by 2y' in the beginning instead, and integrated the equation with respect to x. I then used separation of variables. In the integral, I used a hyperbolic trig substitution in place of partial fraction decomposition. I got an answer in terms of a natural logarithm of a hyperbolic secant in the end, after simplification.
Love watching these videos. The tricks and rigor are fun. Always reminds me of endless weekends chewing on problems like this. Thanks Shawn
Standard autonomous equation substitution y'=p(y) leads to p^2=A^2-2exp(y). Then sub t=ln(y) leads to equation you solved at the end of the video
Thanx a lot for the substitution variables in this one! 😉
12:45 Are there not solutions on the negative branch of the u = w² substitution as well?
Yeah, lots of solutions left out…
I think it ends up not being a necessary consideration, as it ultimately just gets absorbed into the arbitrary constant A (if you follow where the negative sign would go, it ends up outside the integral, whose antiderivative starts with arbitrary constant 1/A).
And in terms of the substitution for z, you end up squaring z after undoing the substitution anyway, so it ultimately doesn't matter there either.
It would have been nice to address that branch explicitly though. As well as all of the occasions on which we divided by variables or arbitrary constants that could be 0, and discussed the possible values of the arbitrary constants (especially when B=e^b, which significantly limits the possible values of B).
A simpler solution method starts by multiplying both sides by 2y', from which, on antidifferentiating both sides with respect to x, (y')^2+2e^y=A^2; then y'=sqrt(A^2-2e^y) is separable, but still a few substitutions are required to find the indefinite integral of 1/sqrt(A^(2)-2e^y) with respect to y, and then there will still be difficulty in solving the resulting equation for y in terms of x.
I continue to wish that, after these amazing derivations you would show us what the (family of) functions look like.
I agree, ODEs are much easier to understand when you draw pictures.
I took part in IPhO; there were some tasks about humidity with formulas looking like this answer, and even worse :)
That "w" substitution is a neat and pretty general technique!
12:31 I think we need some explanation of why we only keep the positive root, such as noting that once you have a negative-definite t, exp(y) = t has no real solution, or something.
That is some hard math over there! Loved how you did it. I definitely could not have done it.
Wow. Didn't see that crazy answer coming!
For some reason I was thinking about this differential equation last week. Very nice video!
Oh nice, i solved almost exactly the same equation while i was studying in university, it was just y''=e^y.
General method we were taught to solve equations y''=f(y): make a substitution y'=p(y) => y''=p'*y'=p'*p, so we got first order ODE with separable variables: p'*p=f(y).
Very nice example
The “abuse” of Leibniz notation seems to be aligned with my intuitive notion that the derivative of a function is a measure of how sensitive the value of that function is to change given some arbitrarily small change in x. This also aligns with the definition of a derivative as delta y / delta x as delta x approaches 0.
I’ve been told that my intuitions are “on the right track,” but not quite accurate-effectively I’m describing a ratio of differentials instead of a derivative.
Where am I mistaken? For context, I’m only familiar with single variable calculus and the most basic of ODEs.
There is one more magnificent but pretty well-known method of solving such DEs, which can solve this in several minutes. I can give a clue: use the fact that the DE contains no x other than in the derivatives.
Getting a feeling for the resulting function when we set all constants of integration to 1:
y(x) = ln2-|x| for large |x|
with a nicely rounded apex that crosses the y axis at -ln2
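Those two observations can be checked against the closed form from the video with the constants set to A = B = 1, i.e. y = ln(2e^x/(1+e^x)^2) (a sympy sketch, assuming that form):

```python
import sympy as sp

x = sp.Symbol('x', real=True)

# Video's solution with the constants set to A = B = 1 (assumed form).
y = sp.log(2*sp.exp(x) / (1 + sp.exp(x))**2)

# Crosses the y-axis at -ln 2 ...
assert abs(float(y.subs(x, 0)) + float(sp.log(2))) < 1e-12

# ... and approaches ln 2 - x as x -> +infinity.
assert sp.limit(y - (sp.log(2) - x), x, sp.oo) == 0
```

By symmetry of the solution family, the same linear asymptote with slope -1 in |x| appears on both sides of the rounded apex.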
I solved a very similar equation once, y"=e^(2y), with ln(sec(x)) being the obvious solution. I don't remember the general form.
I used a different trick, I multiplied both sides by y', and from there integrated both sides with respect to x. You can pretty easily get a separable equation form that.
At about 15:15 you say "We're really into the weeds here" - that's the understatement of the week. Lol! I finally got out of the weeds with the help of a coast guard rescue helicopter! Lol. Weeds and lots of mud! Followed everything to the end however. Lots of plodding through the weeds but I found my way out! Very nice trip! Lol.
In the general case of the equation y'' + f[y] = 0 one has (y')^2 + 2F[y] = const as a first integral, where F[y] is the antiderivative of f[y]. From there one can go on and find an explicit solution in a number of cases.
This is a very, very interesting example! The form (a+x)/(a-x) comes up again and again in relativity (e.g. in the formula for aberration, and in general because linear fractional transformations of the complex plane are mapped onto Lorentz transformations in spacetime). Also of interest is approximating e^y by 1 + y, then you get the equation for a forced oscillation (with constant forcing term).
i used laplace transform but i think this is also a wonderful way to solve it
18:41
Dang, the end is so satisfying!
It's funny to think that technically you can do this with no substitutions
wait, it's not that difficult.
what about the hamiltonian thing:
y'' + e^y = 0 | * y'
y' y'' + y' e^y = 0 | integrate
(y'^2)/2 + e^y = E - first integral
we moved from second order to first order differential equation, you can split the variables like so
dy/dx = sqrt(2E - 2e^y)
dy/sqrt(2E - 2e^y) = dx
and then try to calculate this integral, wolfram gives the same looking answer (because tanh^-1 is log in disguise)
but actually, there's another trick, if one considers u = e^y
then u'' = (y'' + (y')^2) * e^y
add two first equations and get
(y'' + (y')^2) + 3e^y = 2E
multiply by e^y
u'' + 3u^2 = 2E*u
do the first integral thing again
(u')^2/2 + u^3 - E u^2 = H
this one is easier to integrate, but it would give 3 constants, maybe some of them are dependent on each-other, maybe there's a mistake somewhere
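The identity u'' = (y'' + (y')^2)·e^y used in the steps above is just the chain and product rule; a one-line sympy check:

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')

# For u = e^y, the second derivative is (y'' + (y')^2) * e^y.
u = sp.exp(y(x))
lhs = u.diff(x, 2)
rhs = (y(x).diff(x, 2) + y(x).diff(x)**2) * sp.exp(y(x))
assert sp.simplify(lhs - rhs) == 0
```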
Hi Michael, my name is Gustavo and I have recently discovered your videos and soon became a follower. I am a theoretical physicist and really enjoy mathematical problems like the ones you post regularly. Regarding the diff eqn of this video, I've thought of making y = ln f(x) instead. It seems to give a different nonlinear equation that involves f' and f^2, but I have not pursued this in any detail yet. Anyway, this is just a comment. Besides this, whatever you can show about the Riemann zeta function would be great. I am a fan of Riemann, especially regarding his famous function.
Keep posting interesting videos. As a theoretical physicist am interested also in functional equations, as they appear in my research.
All the best.
Thank you for making my birthday even better :D
Must be a bunch of things that we cancelled out of the denominator that therefore cannot be equal to zero. Is it safe to lose track of these?
What happened to the +/- when taking the square root?
I agree with physicists here: for any U(y) equation y' '+dU(y)/dy=0 is immediately reduced to the first order ODE using its integral (1/2) (y')^2+U(y)=cons (energy). This is a standard straightforward way of solving such problems.
The substitution w(t)=t' is quite surprising; it resembles something called a hodograph transformation.
y(x)= - 2*ln [cosh(C1*x+C2)/(sqrt(2)*| C1|)] . here C1≠0 and C2 are arbitrary constants.
The answer was obtained manually, analytically. That the answer is correct you can verify, for example, using MATHCAD.
Edit:
Everything bracketed below is all wrong. I apparently forgot how to factor, and even then it wouldn't work. I should not try to solve things in my head while eating lunch. I do have a question about the y``=y`(dy`/dy) "abuse". Is it that, because with the chain rule you are treating it like a functional, y` has the form f[y(x)], while the Leibniz rule is for functions of the form f(y,x)? At least with the calculus of variations I'm pretty sure there is no difference between f(y,x) and f[y(x)] (at least that's what quantum field theory books say). I'm pretty sure you get all the same infinitesimal generators with either function space. Are they just isomorphic to each other?
{I'm pretty sure this is separable in the following manner: (d/dx - ie^y/y)(d/dx + ie^y/y)y = 0. The 1st order diff eq of each root gives an answer of the form y = ln(ix + C). I think at the end of the day, using Euler's identity, this is the same solution, but a much simpler method.}
Ouch.
Each step seems to make sense, but using the substitution from the beginning makes things much more complicated than necessary.
y'' = -e^y -> y' * y'' = -e^y * y' -> 1/2 * ( (y')^2 )' = -(e^y)' -> 1/2 * (y')^2 = -e^y + A, A as constant -> y' = sqrt(2A - 2*e^y). This reduces the equation to a first derivative. From here, you can substitute t = e^y and get the same result.
Not sure if it's actually THAT involved or just showing off.
This can be solved in less than one minute by multiplying the equation by y' and directly integrating it twice, using a known tabularized integral.
So I've noticed that, say, 2nd order differential equations will have 2 constants which show up in the solution (A and B in this case). Is there some particular reason for this? Like, are solutions to a differential equation always some sort of manifold with dimension equal to the order of the equation embedded into some space of functions or something?
To explain it simply, the order of the derivative determines how many times one would have to integrate and therefore the number of constants of integration the function needs. For example if one solves the equation d3y/dx3=0, then integrating both sides once produces a constant (which I’ll call A) to make d2y/dx2=A. Integrating again produces another constant B and gives the equation dy/dx=Ax+B. And then the last integration gives the function y=(A/2)(x^2)+Bx+C.
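The count of constants can be seen directly by letting a CAS do the integrations; a quick sketch with sympy's dsolve (sympy assumed available):

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')

# Solving y''' = 0 requires three integrations, hence three constants.
sol = sp.dsolve(sp.Eq(y(x).diff(x, 3), 0), y(x)).rhs

constants = sol.free_symbols - {x}
assert len(constants) == 3   # C1, C2, C3 in sympy's labeling
```

The same bookkeeping holds in general: an n-th order ODE's general solution carries n independent constants, which is the "dimension" intuition in the question above.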
This is the stationary solution of the Liouville field theory
u_{tt} - u_{xx}=exp(u),
a PDE for u(x,t) which you can also solve exactly.
There is a much shorter way to get the solution using the first integral (y')^2 + 2*Exp[y] = const. One only has to first use the new dependent variable
u[x] = Exp[y[x] - yo], yo = value at x = 0, and then realize that the integral of 1/(u √(1-u)) is, up to a constant factor, artanh[√(1-u)]. The rest is easy.
the word 'deceptively': "am i a joke to you?"
As others have pointed out, this is familiar in physics. Let p = y'; then a Hamiltonian H(p,y) = p^2/2 + e^y gives equations y' = ∂H/∂p = p and p' = -∂H/∂y = -e^y, so y'' = p' = -e^y. p is momentum and e^y is a potential. H is conserved because x, the independent variable, doesn't appear: dH/dx = ∂H/∂x = 0, so y'^2/2 + e^y = constant, and this is a separable equation; just one integral to go. I guess my point is that I was surprised that it's characterized as "deceivingly difficult". But I love your channel anyway, Penn.
at 15:04, how did dt/t vanish?
What about W= - sqrt(u)
Why are we only looking at positive roots of u
I tried to solve by integrating y'' to get y', but it doesn't seem the integral of exp(f(x)) dx has a closed form.
Can you not further simplify the answer by breaking down the ln? y = ln(2A^2 B e^AX) - ln((1+Be^AX)^2) = AX + ln(2A^2 B) - 2 ln(1+Be^AX). I guess sadly the 1+ in that last logarithm stops any more simplification, but...
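Whether that rewritten form actually solves the ODE is quick to check symbolically. A sympy sketch (A, B taken as positive constants), writing e^y directly as the expression inside the logarithm:

```python
import sympy as sp

x, A, B = sp.symbols('x A B', positive=True)

# e^y for the quoted solution y = ln(2A^2 B e^(Ax)) - ln((1 + B e^(Ax))^2)
ey = 2*A**2*B*sp.exp(A*x) / (1 + B*sp.exp(A*x))**2
y = sp.log(ey)

# Residual of the original equation y'' + e^y = 0.
residual = sp.simplify(y.diff(x, 2) + ey)
assert residual == 0
```

The residual cancels identically, so the extra "1 +" in the last logarithm is doing real work and can't be simplified away.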
Where does this minus come from at 14:48
Nice problem! I was just wondering how we can assume the constant to be A^2 without knowing it must be non-negative, but we know that since the constant is u/t^2+2t and u>=0, t>0 because u=w^2,t=e^y. Also factoring out t^2 of the square root and simplifying sqrt(t^2)=t wouldn't work without knowing t is non-negative.
if y''+exp(x)=0 then y''(x)
The classic test for correctness is to put the solution into the DE. However, your question is deeper than that. I bet that you can see that there might be more math buried in there that can be found. You raised a very good question.
Could we call 2A^2B at the end a new constant?
I don't understand how you cleared the denominator. Wait, did you put t on the other side of the equal sign and then multiply both by t^2? Obviously. But you did it without thinking or showing what you were doing, which makes me think there's a simplification rule I don't know.
y''+e^y=0 => y'^2+2e^y=c_1 => y'=sqrt(c_1-2e^y) => dy/sqrt(c_1-2e^y)=dx. End up with hyperbolic tangent?
How ironic: just yesterday I stumbled upon this equation with a right-hand side, of the form y'' + a*exp(y) = x.
What about multiplying the ORIGINAL equation by y prime? You can then integrate immediately and spare a lot of work....
How did he get the ln(1-(z/A)) - ln(1+(z/A)) thing? Help Please
How come you can assume w to be positive?
An ordinary differential equation from the family "not containing x explicitly" . Standard reduction of the order of the equation by one substitution y'(x) = p(y(x)),
then y"(x)=d(y' (x) )/dx=dp(y(x))/dx =dp(y)/dy * dy(x)/dx = dp(y)/dy *p.
y'' +e^y=0 => p*dp/dy = -e^y => 2p*dp=-2 e^ydy => p^2 = -2 e^y +C0 (C0>0) =>
p ≡ dy/dx = ±sqrt (C0 - 2*e^y) => dy/sqrt (C0 - 2*e^y) =± dx =>
1/(sqrt(2))* e^(-y/2)dy/sqrt(C0/2 *e^(-y)-1) =± dx .
Denote t ^2 =C0/2 *e^(-y) =>
[2/sqrt(C0)]*(- dt)/sqrt(t^2-1)=± dx => dt/sqrt(t^2-1)= (sqrt(C0)/2)* (± dx).
Denote C1≡ sqrt(C0)/2 >0 , then t= sqrt(2)*C1*e^(-y/2).
dt/sqrt(t^2-1)=±C1 dx => ln(t+sqrt(t^2 -1)) =± C1*x+ C2.
t+sqrt(t^2-1) = exp(±C1*x+C2)=> sqrt(t^2-1) = exp(±C1*x+C2)-t.
t^2-1 = [exp(±C1*x+C2)]^2 -2*exp(±C1*x+C2)*t +t^2.
t = {[exp(±C1*x+C2)]^2 +1}/[2*exp(±C1*x+C2)] = {exp(±C1*x+C2) + exp[-(±C1*x+C2)]}/2 =
= cosh(±C1*x+C2).
sqrt(2)*C1*e^(-y/2) = cosh(±C1*x+C2).
Answer: y(x) = -2*ln[cosh(±C1*x+C2)/(sqrt(2)*C1)], C1>0, C2 - arbitrary constant, or
y(x) = -2*ln[cosh(C1*x+C2)/(sqrt(2)*|C1|)], here C1≠0, C2 - arbitrary constants.
Hell, that was complicated with all the substitutions. Probably easier would have been to realize that after the substitution t = exp(y), the ODE reads (with t' = dt/dx)
d/dx t'/t=-t
2 * t'/t * d/dt t'/t = -2
d/dt (t'/t)^2 = -2
(t'/t)^2 = -2t + A^2.
You can get to 13:53 a lot quicker! [I see a few others found this]
y'' = -exp(y)
Multiply both sides by y'
y'y'' = -y'exp(y)
Both sides are perfect differentials.
(y'^2)'/2 = -(exp(y))'
Integrate...
y'^2 = 2(-exp(y) + c)
y' = sqrt(2(c - exp(y)))
This is separable
dx = dy/sqrt(2(c - exp(y)))
Only the integral is left
x = int(dy/sqrt(2(c - exp(y)))
Substitute t = exp(y), dt = t*dy
x = int(dt/[t*sqrt(2(c - t))])
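And that last integral is elementary. A sympy sketch with a candidate antiderivative (found via the substitution s = sqrt((c - t)/c), valid for 0 < t < c), verified by differentiation at a sample point:

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)

# The last integral above: x = ∫ dt / (t*sqrt(2*(c - t)))
integrand = 1 / (t * sp.sqrt(2*(c - t)))

# Candidate antiderivative for 0 < t < c, via s = sqrt((c - t)/c).
F = -sp.sqrt(2/c) * sp.atanh(sp.sqrt((c - t)/c))

# Spot-check F'(t) = integrand numerically inside the domain.
diff_expr = F.diff(t) - integrand
assert abs(float(diff_expr.subs({c: 2, t: sp.Rational(1, 2)}))) < 1e-12
```

The atanh form is the same hyperbolic-tangent disguise of the logarithm mentioned earlier in the thread.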
12:32
why is not the sqrt plus-minus?
The t substitution is pretty natural.
But using the chain rule to transform the equation from smth involving derivatives in x to derivatives in t is really clever. You have to be intentional, you have to want to change a derivative term in x to a derivative in t because all the rest of the equation is in t. That's a leap.
The integrand step is technical. One could use the general solution to first order equations. It works just as well. It is less technical, but also less elegant.
The other substitutions are obvious and the calculations follow. It's lengthy, but nothing hard.
A more rigorous proof would take care of the fact you can divide by t (obvious, but still worth mentioning), and actually write A^2 for the integration constant. In a first approach, you would not write A^2, you'd do it after a first attempt and wishing you had a square.
Same kind of deal with the integration to ln functions involving Abs(), you have to discuss the sign. It's easy but it should be done.
You can use the hyperbolic tangent function rather than deal with these asymmetrical exp.
Given A and B are constants (depending on initial conditions), one can rewrite them to convenience (while still paying attention when factoring things)
You then arrive at
y = ln ( 2*F^2 / cosh^2(F*x + E) )
Playing with hyperbolic derivatives and identities, it's a 4-liner to do a sanity check that y'' = -exp(y)
This was great fun.
At the beginning, shouldn't we say that t can't be equal to zero, since it appears in the denominator? Also, can we safely assume this or does it require more work?
t = exp(y), and the exponential function is always non-zero. If y is only real, this should be obvious (a positive number to a power won't ever be zero). If y is complex, separate it into real and imaginary parts; again the real part gives a non-zero modulus, and the imaginary part can only make it zero if sine and cosine are simultaneously zero, which never happens.
(edited to add thumbnail of proof.)
@@xizar0rg Right. I haven't thought of that. Thanks.
There were integrals in this, so maybe 'integral suggester' was the right name after all!
@15:29: It would've been easier to rewrite the integral as 1/A*(Ln(A-z)-Ln(A+z)).
Conservation of energy!
There is a trick:
d^2(y)/dx^2 = d/dx(dy/dx) = d/dy(dy/dx) * dy/dx = d/dy(y') y' = d/dy(y'^2/2)
Thus,
d/dy(y'^2/2) + e^y = 0
and we can integrate this in terms of y.
y'^2/2 + e^y = C1
y' = +-sqrt(2*C1 - 2*e^y)
This turns into integration
+- Int dy/sqrt(2*C1 - 2*e^y) = x + C2
Integral seems solvable. See thread.
Disclaimer: this is more a proof that the integral is solvable, not a total solution. And I could have made mistakes, but I hope I didn't.
Let us denote 2*C1 as A
We do substitution u = A - 2*e^y.
du = -2*e^y * dy = (u-A)*dy
we get
Int du/[(u-A)sqrt(u)] =2 Int dv/(v^2-A) where v = sqrt(u)
if A is positive or zero or negative we get different options.
(+) for positive A= a^2:
2 Int dv/(v^2-a^2) =1/a ln(abs( (v-a)/(v+a))) = x + C2
so
abs( (v-a)/(v+a) ) = exp(a*x + C2*a)
(0) 2 Int dv/(v^2) = -1/v = x + C2
(-) for negative A = -a^2:
2 Int dv/(v^2+a^2) =1/a atan(v/a) = x + C2
all cases look solvable in terms of v.
Then we do back transformation for each case: v->u->y.
Note: we could use sqrt(A) in the complex sense to do all cases in one step.
The trick is actually the same as having u(y(x))=y'(x). Differentiating both sides with respect to x, you get
u'(y)*y'=y''->u'u=y''
u'=d/dy(dy/dx)=du/dy
and u=dy/dx
It's much easier, at least for me, to think about it this way instead of switching dx's and dy's.
@@lih3391 do as you please 🙂
Wolfram Alpha gives a different solution, y(x) = log(1/2 (c_1 - c_1 tanh^2(1/2 sqrt(c_1 (c_2 + x)^2)))).
If I use Alpha to plug your solution into the original equation, it yields (2 A^2)/(B e^(A x) + 1) - (2 A^2)/(B e^(A x) + 1)^2 which is not obviously zero.
Maybe he has some factors of 2 wrong in his answer. As suggested in other comments, it is best to first reduce it to a 1st order ODE, then check the solution.
Does the equation have the unique solution if y(0)=0, y’(0)=c? The equation seems to have 2 coefficients B.
Probably, I was wrong. I guess B should be below 1, and therefore there is only one solution.
I am NOT ready for next semester 😂
Please do y"=sin(y)
1:21 someone being murdered in the background
Just multiply everything by y' and integrate. No substitution needed
A second-order differential equation.
is there a physical system that is modeled by y'' + e^y = 0?
Probably not but could be
Looks similar to a simplification of the Poisson-Boltzmann equation, which has a lot of applications apparently
It is a variant of so-called stationary thermal explosion equation, if you are interested, look for Frank-Kamenetskii.
It's the stationary solution of Liouville field theory, a classical version of one of the simplest quantum field theories. If I remember rightly this PDE also appears in differential geometry. You can replace e^y by sinh y or sin y, which gives sinh-Gordon or sine-Gordon, and you can also do
e^y -e^{-2y} which is called the Tsitzeica equation. All these cases are explicitly solvable, in a certain sense (the ODEs, and the PDEs they are reduced from).
At some point, I was worried that you'd run out of letters for your substitutions... :)
2:45 (e^y)' = y'e^y = -y'y'' = -1/2 (y'^2)' => e^y = -1/2 y'^2 +c, which is a separable first order DE.
Is it really? I don't think that was a good place to stop. It's screaming to be simplified!!
Damn it, now I gotta do it myself.
I loved it btw. Thank you for the amazing content.
More d.e.q. please
That's a good place to stop...
This diff eq is one big abuse of replacement of variables :)
This was great and here’s my try on the ode:
Differentiate both sides of the initial ode to get
y’’’+e^y y’=0 but y’’=-e^y so y’’’=y’’y’.
We can then say u=y’ to get u’’=u’u and then integrate both sides so that
u’=(u^2+4C^2)/2 which means (1/2C)arctan(u/2C)=x/2+K or u=2Ctan(Cx+2kC)=y’ which means y=A+ln(sec^2(Cx+2kC)). We then substitute this back into our initial ode to find information on some constants. Finding y’’ and e^y gives us that 2C^2=-e^A which means C=(i/sqrt(2))exp(A/2)). We then let B=exp(A/2)/sqrt(2) so that A=ln(2B^2) and we get y=ln(2B^2)+ln(sec^2(Bix+2kiB))=ln(2B^2sech^2(Bx+2kB)).
So y is of the form y=ln(2B^2sech^2(Bx+P)) which I think is what u got but I didn’t write out sech in terms of e
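A quick symbolic check of that sech^2 form (a sympy sketch; B taken positive, P real, writing sech as 1/cosh and rewriting hyperbolics as exponentials so the residual cancels):

```python
import sympy as sp

x, P = sp.symbols('x P', real=True)
B = sp.Symbol('B', positive=True)

# Proposed closed form: y = ln(2*B^2 * sech^2(B*x + P)).
ey = 2*B**2 / sp.cosh(B*x + P)**2   # equals e^y (argument of ln is positive)
y = sp.log(ey)

# Residual of y'' + e^y = 0; rewrite hyperbolics via exp so it cancels.
residual = sp.simplify((y.diff(x, 2) + ey).rewrite(sp.exp))
assert residual == 0
```

So the sech^2 answer satisfies the ODE identically; it is the same family as the exponential form in the video, since sech is a ratio of exponentials.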
I'll be reviewing my DEs because this went over my head a bit lol.
And out of curiosity, how would you solve the discrete analogue?
Do you mean discretize the continuous equation and solve it? If so, aren't there many ways to formulate the discrete equation?
How would you make the exponent discrete?
@@Noam_.Menashe I have no idea, it's part of my question, but I didn't ask it very well. My bad
@@abrahammekonnen I would guess 2^n.
@@Noam_.Menashe Expand it in a Taylor series. Drop any terms which contribute at less than the step size or smaller than the grid size. I'm sure there are many other ways to deal with it also. That was my point, there are an infinite number of ways of approximating a concept.
I have a suggestion of problem: find f(x) such that f(f(x)) = e^x
this could be interesting because it might motivate him to make a video on functional square roots in general
This can be done easier (as some of the suggestions below agree), multiply both sides by y' and integrate to get (1/2) (y')^2 + exp(y) = K, for some positive constant. Now you can separate variables to get: dy/Sqrt(K-exp(y)) = + or - sqrt(2) dx. Now make a substitution u = sqrt(K-exp(y)), which gives dy/u = 2du/(u^2-K), so we have 2du/(u^2-K) = + or - sqrt(2) dx, which we can easily integrate by writing 2/(u^2-K) = (1/sqrt(K)) ( 1/(u-sqrt(K)) - 1/(u+sqrt(K)) ), giving log() integrals. Finally solving back for y from u, gives the general solution for arbitrary constants K and C resulting from the last integration.
this is absolutely ridiculous. I'd like to understand how long it took you to figure this out. It certainly has to be longer than the length of this video.
When you are experienced with differential equations, these kind of things are standard problems that you can solve in a few minutes. As mentioned in other comments, this ODE has the form of Newton's 2nd law (acceleration proportional to force), and using conservation of energy immediately reduces it to a 1st order ODE (kinetic energy + potential energy = constant).
I was solving equations like this in another way:
1. The substitution is y'=p(y) => y'' = dp/dx = dp/dy * dy/dx= p' * y' = p'p
2. Thus, the equation is p'p + e^y = 0, it is a separable equation
3. y = -ln (1/2 p^2)
4. p = y' => p = dy/dx => dx = dy/p
5. (from 3) dy = (-2/p) dp
6. dx = -2/(p^2) dp
7. x = 2/p
8. p = 2/x
9. substitute p in (3): y = - ln (2/(x^2))
Simple
I found an imaginary solution y = ln(-2 x^-2), but I'm not sure how to turn that into a real solution lol
So where did x enter the equation? You just declared y to be dependent on x with no explanation - so I was asking myself that while watching.
0:02 loved that second 😅.