Legit don't know what I'd do without these videos. Real analysis would be kicking my ass rn if it weren't for you.
Glad I can help! Thank you for watching and good luck!
You are 30x better than every upper-division college professor I've had, and they get paid thousands. American education system!!!
This video is just brilliant. Crisp and to the point.
Thank you! Check out my analysis playlist if you're looking for more: ua-cam.com/play/PLztBpqftvzxWo4HxUYV58ENhxHV32Wxli.html
Well, this is the kind of proof method used in calculus courses, but I think in a real analysis course the proof should be in terms of sequential convergence. Still, a very clear and nice explanation in your videos.
The main thing I always found confusing about εδ proofs is how "backwards" they feel, in the sense that you have to be veeery careful about what things you assume and in what order, and it's easy to make mistakes. I'm leaving a way I like to do it, in case others find it easier too:
I find doing εδ proofs easier via contraposition, i.e. we assume 0 < ε ≤ |f(x) - f(c)| instead and compute a suitable δ directly, since (|f(x) - f(c)| ≥ ε ⇒ |x - c| ≥ δ) ⇔ (|x - c| < δ ⇒ |f(x) - f(c)| < ε). E.g. for f(x) = √x at a point c > 0 (the rest of D):
let ε > 0, let x ∈ D, assume ε ≤ |√x - √c| ∴
0 < ε ≤ |√x - √c|
let's multiply by |√x + √c| ≥ 0:
ε|√x + √c| ≤ |√x - √c|·|√x + √c| = |(√x)² - (√c)²| = |x - c|
ε(√x + √c) ≤ |x - c|, but since √x ≥ 0 and √c > 0, we know ε√c ≤ ε√x + ε√c, ergo ε√c ≤ |x - c|
let δ = ε√c, then via contraposition, again:
... ( |x - c| < δ ⇒ |√x - √c| < ε ) ☐
I mean, it's the same steps. The only differences are that we don't need to work with *strict* inequalities as much (which makes it easier, imo, besides making sure the δ we get is in fact > 0), and that we can compute δ directly instead of needing to assume |x - c| is strictly less than an unknown δ. It's a tiny bit less mentally taxing, I think, yet I never see εδ proofs done via contraposition. Can anyone tell me why?
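(If anyone wants to sanity-check that δ = ε√c really does the job before writing it up, here's a minimal Python sketch, assuming D = [0, ∞); the function name check and the sampled (c, ε) pairs are just illustrative, not part of the argument. It samples points of D within δ of c and confirms |√x - √c| < ε.)

```python
# Numerical sanity check (not a proof) that delta = eps*sqrt(c) works for f(x) = sqrt(x):
# every sampled x in D = [0, inf) with |x - c| < delta should give |sqrt(x) - sqrt(c)| < eps.
import math
import random

def check(c, eps, trials=100_000):
    delta = eps * math.sqrt(c)                          # the delta computed via contraposition
    for _ in range(trials):
        x = max(0.0, c + random.uniform(-delta, delta)) # stay inside D = [0, inf)
        if abs(x - c) >= delta:
            continue                                    # only test points satisfying the hypothesis
        if abs(math.sqrt(x) - math.sqrt(c)) >= eps:
            return False                                # would contradict the chosen delta
    return True

print(check(c=0.25, eps=0.1), check(c=1.0, eps=0.01), check(c=9.0, eps=0.5))
```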
Brilliant video, thank you so much!
My pleasure, thanks for watching!
Hi, thanks for the video. For the limit at 0, weren't we supposed to prove only the right hand limit, since the left hand limit doesn't exist?
for a limit to exist, it must exist on both sides
If we apply the epsilon-delta argument considering the domain, is x^1.5 differentiable at x=0?
What software do you use, please? Very nice video!
... perhaps a dumb question, but how do you use the epsilon delta definition to prove that a function is NOT continuous on its domain?
That's a great question! It would be a little difficult to explain here in a comment without math notation, but I will definitely make some lessons on the topic. For an example, consider the piecewise function f(x) = -1 for all negative x and f(x) = 0 for all nonnegative x. It is not continuous on its domain because it is not continuous at x = 0, where the function's values jump from -1 to 0. You could take epsilon equal to 1/2 for a counterexample, then show that no matter what delta you choose, not all x within delta of 0 (the point of discontinuity) have images within 1/2 of f(0). This is because some of the x values within delta of 0 will be negative, and thus will have images of -1, which are more than 1/2 away from f(0) = 0.
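Here's a tiny Python sketch of that counterexample (the specific deltas tried are just illustrative): for every candidate delta, the point x = -delta/2 lies within delta of 0, yet its image is a full 1 away from f(0) = 0, so no delta can work for epsilon = 1/2.

```python
# Illustration of the counterexample: f(x) = -1 for x < 0, f(x) = 0 for x >= 0.
# With eps = 1/2, no delta works: x = -delta/2 is within delta of 0,
# but |f(x) - f(0)| = 1 >= eps.
def f(x):
    return -1.0 if x < 0 else 0.0

eps = 0.5
for delta in [1.0, 0.1, 0.001, 1e-9]:        # no matter how small delta gets...
    x = -delta / 2                            # ...this x satisfies |x - 0| < delta
    assert abs(x - 0) < delta
    print(delta, abs(f(x) - f(0)) >= eps)     # ...yet the image stays eps or more away from f(0)
```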
@@WrathofMath Thank you for replying! It makes sense now.
Can we prove that it is uniformly continuous with the same method?
You’re the best❤
Thank you! 😁
If x and c are smaller than 1, does the inequality at 7:24 still hold?
Same. I wonder what happens if sqrt(x) = 1 and sqrt(c) = 0.001; then the right side would be much larger than the left side.
@@nguyen9670 No, actually it's all right. I got confused about what he said. What he has written is right. If you have doubts, ping me.
@@ChaloGhat Thanks, actually I just got it a moment later haha.
Would it be ok to say delta/sqrt(c) < delta and then set delta = epsilon?
Let δ > 0 be given...
δ/√c < δ is true for c > 1,
but what if our point is c = 1/4 = 0.25? Then δ/√c = δ/√(1/4) = δ/(1/2) = 2δ > δ,
so your observation rests on a false premise, and that is why we need to use δ = ε√c.
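A quick numerical check of that point (the sample values of c and δ are arbitrary): δ/√c < δ only when c > 1, and at c = 1/4 it is 2δ, which is why the proof takes δ = ε√c instead.

```python
# delta/sqrt(c) < delta holds only for c > 1; for c <= 1 it fails,
# so that bound cannot be assumed in general.
import math

delta = 0.1
for c in [4.0, 1.0, 0.25, 0.01]:
    print(c, delta / math.sqrt(c), delta / math.sqrt(c) < delta)
```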
@@tomctutor How about letting delta = |sqrt(x) - sqrt(c)|*epsilon?
well those are some interesting captions...
I've never read the auto-generated captions, I think I'll keep it that way haha!
Thank you!
Glad to help!
Nice
Thanks!
Please be my prof for the rest of my degree.
Wow!