this limit has a dangerous solution!!

  • Published 25 Jun 2024
  • 🌟Support the channel🌟
    Patreon: / michaelpennmath
    Channel Membership: / @michaelpennmath
    Merch: teespring.com/stores/michael-...
    My amazon shop: www.amazon.com/shop/michaelpenn
    🟢 Discord: / discord
    🌟my other channels🌟
    mathmajor: / @mathmajor
    pennpav podcast: / @thepennpavpodcast7878
    🌟My Links🌟
    Personal Website: www.michael-penn.net
    Instagram: / melp2718
    Twitter: / michaelpennmath
    Randolph College Math: www.randolphcollege.edu/mathem...
    Research Gate profile: www.researchgate.net/profile/...
    Google Scholar profile: scholar.google.com/citations?...
    🌟How I make Thumbnails🌟
    Canva: partner.canva.com/c/3036853/6...
    Color Palette: coolors.co/?ref=61d217df7d705...
    🌟Suggest a problem🌟
    forms.gle/ea7Pw7HcKePGB4my5

COMMENTS • 180

  • @Double_U_tau_Phi
    @Double_U_tau_Phi 7 months ago +68

    This was so dangerous. I hurt my eyes.

    • @supergamer8030
      @supergamer8030 6 months ago +2

      Hope they're better now

    • @sk4lman
      @sk4lman 6 months ago +2

      Take a long look at his shirt, that's a sight for sore eyes! ❤

  • @romajimamulo
    @romajimamulo 7 months ago +37

    I think it's way easier to prove it's bounded above if you use 2. For a proof by contradiction, say a(m+1,n) is the earliest term that is greater than or equal to 2 (we can show trivially that it can't be the m=1 term).
    Then a(m,n) must be less than 2.
    So a(m,n) + 1/n is less than 3, regardless of what n is.
    So sqrt(a(m,n) + 1/n) is less than the square root of 3, and the square root of 3 is less than 2. But the expression we built is a(m+1,n), which we said is greater than or equal to 2.
    So we've encountered a contradiction. (A quick numerical check of the bound follows below.)
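    A minimal numerical sketch of that bound, assuming the recursion from the video, a(1,n) = sqrt(1/n) and a(m+1,n) = sqrt(1/n + a(m,n)); the helper name `a` and the sampled ranges of m and n are arbitrary illustrative choices:

    ```python
    import math

    def a(m, n):
        """m nested roots of 1/n: a(1,n) = sqrt(1/n), a(m+1,n) = sqrt(1/n + a(m,n))."""
        value = math.sqrt(1.0 / n)
        for _ in range(m - 1):
            value = math.sqrt(1.0 / n + value)
        return value

    # Every sampled term stays below 2 (in fact below phi ~ 1.618...).
    assert all(a(m, n) < 2 for m in range(1, 200) for n in range(1, 50))
    print(a(200, 1))  # close to the golden ratio 1.618..., still below 2
    ```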

  • @allanjmcpherson
    @allanjmcpherson 7 months ago +22

    I'm really excited for the Dominated Convergence Theorem. I've heard it used as an explanation for why we can change the order of integration, but I've never heard it explained. I tried looking it up, but deciphering what I found would have taken more time than I can devote to the subject.

    • @Adam-rt2ir
      @Adam-rt2ir 6 months ago +1

      I like to state a version of the dominated convergence theorem like this: if a_n, b_n, f_n, a, b are integrable, a_n ≤ f_n ≤ b_n, a_n → a and b_n → b (and so do their integrals), and f_n → f, then the integrals of f_n converge to the integral of f. So it's a bit like the squeeze theorem from calculus.

    • @leif1075
      @leif1075 6 months ago

      Why was it poorly written? Why are so many math articles written so stupidly or impenetrably?

  • @noahprentice751
    @noahprentice751 7 months ago +12

    Great video, super excited for Dominated Convergence Theorem!

  • @amoswittenbergsmusings
    @amoswittenbergsmusings 7 months ago +30

    This was a very beautiful and clear exposition! I found it fascinating that the reason phi plays a role has little to do with its actual value and everything to do with the magical reduction of its square to a linear expression in phi. That's exactly what we need to get the crucial inequality in the induction step. I think it could do with some deep thinking about why this is needed, perhaps by trying to follow the same route for variations of the problem and seeing where it fails.

    • @leif1075
      @leif1075 6 months ago

      I don't think that part was that clear, honestly. Why bother with the a_{m,n} stuff and not just rewrite the expression inside the limit as S, with S = sqrt(1/n + S)? Then square both sides and you have the quadratic with the golden ratio. Much faster, more efficient and clearer, no? I don't see what new insight the a_{m,n} and convergence stuff gives you. Does it actually tell you anything new?

    • @GhostyOcean
      @GhostyOcean 6 months ago +2

      @leif1075 In order to do that, you must know the limit exists in the first place. That's what we are trying to prove, so we would be assuming the very thing we want to show…

    • @GhostyOcean
      @GhostyOcean 6 months ago +2

      I think using an upper bound of 2 is easy. Obviously a(1,1)=1

  • @TheLuckySpades
    @TheLuckySpades 7 months ago +16

    You swap the order of your indices as you start the induction
    You state you need to prove it for a_m,1, then use a_1,k for the rest, while treating k with the properties for m (i.e. a_1,(k+1)=sqrt(1+a_1,k))

    • @Budha3773
      @Budha3773 6 months ago

      But that is the correct property to prove the limit

    • @TheLuckySpades
      @TheLuckySpades 6 months ago +1

      @Budha3773 it's the right property (as in the one we want to use) on the wrong index; the second index tells you which fraction sits under the roots, and the first one tells you how deep to nest them

    • @leickrobinson5186
      @leickrobinson5186 6 months ago

      You found today’s Michael Penn error! Congrats!

    • @spiderjerusalem4009
      @spiderjerusalem4009 6 months ago

      it's as if there's no video of his that has no error, but hey, ye know, everyone has their own grubby schedules

  • @dlevi67
    @dlevi67 7 months ago +5

    Very much looking forward to the video on the Dominated Convergence theorem!

  • @patrickhickey7673
    @patrickhickey7673 6 months ago +3

    I love these kinds of “nested” limits - looking forward to the dominated convergence theorem video!

  • @kevinmartin7760
    @kevinmartin7760 7 months ago +3

    That induction seemed incomplete.
    He started off with a base case m=1 (that is, using 1 for the first subscript), but then the induction step was done on the second subscript; that is, the second step proves a_1,(k+1) < phi if a_1,k < phi, and says nothing about a_m,k where m > 1.
    You don't need induction for the m=1 series with increasing k at all: each value is a single term sqrt(1/k), which is plainly less than phi (and in fact ≤ 1). The induction should run on the first subscript: assume a_m,k < phi and consider:
    a_(m+1),k
    = sqrt(1/k + a_m,k) [by the recursive definition of a_m,n]
    ≤ sqrt(1 + a_m,k) [since 1/k ≤ 1 for natural number k]
    < sqrt(1 + phi) [by the hypothesis for the induction step]
    = phi [by the properties of phi]

    • @TheLuckySpades
      @TheLuckySpades 7 months ago +1

      He got the order of indices swapped as he started the induction, note that the steps he does are the same you do here (except he already did that we can make the second index 1 by the note before the induction)

  • @Neodynium.the_permanent_magnet
    @Neodynium.the_permanent_magnet 6 months ago +2

    Initially, I had a dangerous intuition that this limit should be zero.

  • @Jonasz314
    @Jonasz314 6 months ago +2

    I'm not entirely sure why we introduce a sequence. We can simply write the infinite nested radical expression Xn = sqrt(1/n + Xn), and immediately get to Xn = 1/2 * (1 + sqrt(1+ 4/n))

    • @nitayderei
      @nitayderei 6 months ago +1

      You can only do it once you show the limit converges. He actually does what you recommend starting from about 9:45.
      1-1+1-1+1-1+... = L
      -L = -1+1-1+1-... = -1 + L => L = 1/2 by the same logic alone.

    • @bcwbcw3741
      @bcwbcw3741 6 months ago

      Which has the wrong limit as n goes to infinity. X(inf)=1. The other root, Xn=1/2*(1-sqrt(1+4/n)) does converge to the correct value. Note also that this equation is weird for 0

  • @boubidebibou4547
    @boubidebibou4547 6 months ago

    Finally a video on the dominated convergence theorem! Can't wait to see it!

  • @a_minor
    @a_minor 6 months ago +2

    I kind of understand that you defined and explained all of those terms for the sake of learning, but this limit could have been solved in an easier way (a quick numerical check follows below).
    Call the expression inside the limit (for a given n) 'x':
    sqrt(1/n + sqrt(1/n + sqrt(1/n + .....))) = x ------ (i)
    Since this expression goes on forever, the expression nested one level down looks exactly the same, so it can also be called x:
    sqrt(1/n + sqrt(1/n + .....)) = x ------- (ii)
    Now substitute this seemingly new value of x (eq. ii) into eq. i:
    sqrt(1/n + x) = x
    Solving the quadratic formed after squaring both sides gives:
    x = [1 ± sqrt(1 + 4/n)]/2
    Taking lim n → ∞ on both sides, we end up with lim n → ∞ (x), which is the limit in question:
    lim n → ∞ (x) = lim n → ∞ (1 ± sqrt(1 + 4/n))/2
    Solving the limit (highest power approximation), we get:
    lim n → ∞ (x) = (1 ± 1)/2
    Neglecting the root with the (-) sign, since the limit cannot be zero, as the expression inside the limit only grows as more terms are added:
    lim n → ∞ (x) = 1; hence the answer.
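    A rough numerical sanity check of this (a sketch only; the nesting depth 60, the sample values of n, and the helper name `nested` are arbitrary illustrative choices):

    ```python
    import math

    def nested(n, depth=60):
        """Evaluate sqrt(1/n + sqrt(1/n + ...)) with `depth` nested roots."""
        x = 0.0
        for _ in range(depth):
            x = math.sqrt(1.0 / n + x)
        return x

    for n in (1, 10, 1000, 10**6):
        closed_form = (1 + math.sqrt(1 + 4.0 / n)) / 2   # positive root of x^2 = x + 1/n
        print(n, nested(n), closed_form)                  # the two columns agree, and both tend to 1
    ```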

  • @DrPuschel
    @DrPuschel 7 months ago

    I'm hyped for the future video!

  • @pauselab5569
    @pauselab5569 6 months ago

    There is also a really nice explanation that 3b1b gave a few years ago: the negative root is unstable and the positive root is stable, which means that most starting values converge towards the positive one rather than the negative one.

  • @myrrito
    @myrrito 6 months ago

    Lovely limit. I learned something today.

  • @bubbotube
    @bubbotube 6 months ago

    Very insightful video

  • @moonshine7753
    @moonshine7753 7 months ago +18

    I feel like this could have been solved by using the other favorite trick.
    Since An = sqrt(1/n + An), you can easily solve for An and get the same result without using m or two limits.
    Of course, then you wouldn't be able to show that the two limits cannot be swapped because you wouldn't have them.

    • @micah6082
      @micah6082 7 months ago +23

      The way he approached it in the video is the most rigorous, because these infinite roots are sequences by definition. So by assuming An = sqrt(1/n + An) you're assuming that the sequence converges, which he did not assume and thus had to show. For instance, consider a_{n+1} = sqrt(n^2 + a_n) suddenly you're dealing with infinity inside and outside the limit!

    • @jumpman8282
      @jumpman8282 7 months ago

      𝐴(𝑛) = √(1 ∕ 𝑛 + 𝐴(𝑛)) ⇒ 𝐴(𝑛) = 1 ∕ 2 ± √(1 ∕ 4 + 1 ∕ 𝑛).
      Thus, lim 𝑛→∞ 𝐴(𝑛) is either 0 or 1.

    • @moonshine7753
      @moonshine7753 7 months ago

      @@micah6082 Ah, fair. For some reason I just forgot that you need to show that it converges for that to work.

    • @__christopher__
      @__christopher__ 7 months ago

      @moonshine7753 On the other hand, you can use that method to figure out what the sequence converges to if it converges, and then use that value as the upper bound in the convergence proof. Or, more generally, prove that the difference from it converges to zero.

    • @ernestomamedaliev4253
      @ernestomamedaliev4253 7 months ago +2

      @@micah6082 To be honest, I don't get why someone is assuming that the sequence converges in order to make that statement. We are just "defining" a variable A_n = sqrt(1/n + sqrt(...)). Am I missing something? Then A_n can be anything depending on n...

  • @meeheal
    @meeheal 6 months ago

    Fantastic!

  • @davidcroft95
    @davidcroft95 6 months ago

    We are finally getting the Dominated Convergence theorem video!!!!! 🙏🙌

  • @VaradMahashabde
    @VaradMahashabde 7 months ago

    Well, the dangerous limit order is sensitive to our definition of a_mn. In the solution we effectively have a_0n = 0, but we could also have a_0n = 1, which would give the same limit in the normal order but the limit 1 in the dangerous order.
    So the danger of the dangerous order depends on the initial value of a_mn. Actually, for any a_0n > 0, the dangerous order gives the repeated square root of a_0n, which tends to 1, the same as the normal order.

  • @pizza8725
    @pizza8725 1 month ago +1

    You could easily just set y = the expression, and with some simple simplification we would get y = (sqrt((n+4)/n) + 1)/2; and because sqrt is continuous we can put the limit inside, and the limit of (n+4)/n is 1, so we would get (1+1)/2 = 2/2 = 1.

  • @marceloantunesarraespiston1965
    @marceloantunesarraespiston1965 6 months ago

    Hi, I'm a big fan from Brazil. How can I buy your merch?

  • @juha-mattiperkkio7646
    @juha-mattiperkkio7646 6 months ago

    How about a video on the Henstock-Kurzweil integral and the Lebesgue Dominated Convergence Theorem without Lebesgue integration, measures and all that?

  • @steffenbendel6031
    @steffenbendel6031 6 months ago +1

    My favourite trick is to go to the end of the video and get the result.

  • @kyokajiro1808
    @kyokajiro1808 5 months ago

    If you use y = sqrt(x + y) you get y^2 - y - x = 0, and in turn y = (1 ± sqrt(1 + 4x))/2, which at x = 0 is 1 or 0. But if you look at the graph of the original question, only the top branch exists for x > 0 (unless you take imaginary numbers into account), so from that you can also conclude it's 1, though it's easily mistaken for 0 as well.

  • @spiderjerusalem4009
    @spiderjerusalem4009 6 months ago +1

    To also elucidate the motivation: you want to find α that bounds the sequence.
    aₘ,ₙ ≤ √(1+aₘ₋₁,ₙ)
    ≤ √(1+α)
    But we want aₘ,ₙ ≤ α,
    so perhaps we can attain such a value by equating them, or to put it simply,
    √(1+α) = α
    whose positive solution is indeed φ

  • @broucho
    @broucho 6 months ago

    Great, but I didn't see where you got the golden number phi from.

  • @DJSchreffler
    @DJSchreffler 5 months ago

    If the expression evaluates to x, then x^2 = 1/n + x.
    The quadratic formula (taking the positive root, since we want x > 0) gives x = (1/2)(1 + (1 + 4/n)^(1/2)).
    I suppose you do things to prove that the expression does evaluate to something.

  • @woody442
    @woody442 7 months ago

    Why can we not assume that we let m->∞ and n=m^2.
    What would happen to the limit then?

  • @lexyeevee
    @lexyeevee 6 months ago

    i think simply substituting n = ∞ is ill-defined in the first place - a 1/n term arbitrarily far along will also be nested in arbitrarily many square roots, and so the … is effectively hiding a (1/∞)^(1/∞) = 0⁰ term.

  • @jayathranps1319
    @jayathranps1319 6 months ago

    That quadratic has two solutions at n = infinity. 1 & 0

  • @jalureswara2719
    @jalureswara2719 6 months ago +3

    What if the nested term alternated between + and - sqrt(1/n)? Would that make the limit diverge? I guess comparing it with 1-1+1-1+1... suggests it diverges.

    • @yurenchu
      @yurenchu 6 months ago

      The four complex values of the cycle that the iteration approaches, are at: e^(-i*2π/5) , e^(-i*π/5) , e^(i*2π/5) and e^(i*π/5) .
      Note that:
      c₁ = e^(-i*2π/5) = cos(2π/5) - i*sin(2π/5) = (0.3090169944...) - i*(0.9510565163...)
      c₂ = e^(-i*π/5) = cos(π/5) - i*sin(π/5) = (0.8090169944...) - i*(0.5877852523...)
      c₃ = e^(i*2π/5) = cos(2π/5) + i*sin(2π/5) = (0.3090169944...) + i*(0.9510565163...)
      c₄ = e^(i*π/5) = cos(π/5) + i*sin(π/5) = (0.8090169944...) + i*(0.5877852523...)
      Furthermore, note that for large value of n ,
      √(1/n + c₁) ≈ √(0 + c₁) = √(c₁) = c₂
      √(1/n - c₂) ≈ √(0 - c₂) = √(-c₂) = c₃
      √(1/n + c₃) ≈ √(0 + c₃) = √(c₃) = c₄
      √(1/n - c₄) ≈ √(0 - c₄) = √(-c₄) = c₁
      This explains why the iteration (at increasing values of m) eventually approximates the cycle (c₁ , c₂ , c₃ , c₄) .
      I don't have a rigorous mathematical proof, but I guess that the larger the value of n, the closer the iteration gets to this cycle.
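      A quick check of those four relations with Python's cmath (a sketch; c1..c4 are the values listed above, and the near-zero printouts just confirm the cycle in the n → ∞ case where the 1/n term is dropped):

      ```python
      import cmath
      from math import pi

      c1 = cmath.exp(-2j * pi / 5)
      c2 = cmath.exp(-1j * pi / 5)
      c3 = cmath.exp(2j * pi / 5)
      c4 = cmath.exp(1j * pi / 5)

      # Principal square roots reproduce the 4-cycle c1 -> c2 -> c3 -> c4 -> c1:
      print(abs(cmath.sqrt(c1) - c2))   # ~0
      print(abs(cmath.sqrt(-c2) - c3))  # ~0
      print(abs(cmath.sqrt(c3) - c4))   # ~0
      print(abs(cmath.sqrt(-c4) - c1))  # ~0
      ```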

  • @davidseed2939
    @davidseed2939 6 months ago

    This reminds me of comparing
    x⁰ as x → 0 (answer 1)
    with 0^y as y → 0 (answer 0),
    so 0⁰ is undefined.
    Also interesting to compare the result of m = n → ∞ with other paths, e.g. n = 2^m → ∞.

  • @mrphlip
    @mrphlip 6 months ago

    From the start, I thought that the "dangerous" trick was going to be jumping to the fixed-point trick too soon... ie instead of doing it carefully, as you do at 9:40, instead try to say something like...
    Define x = lim n->oo sqrt(1/n+sqrt(...))
    Therefore x = lim n->oo sqrt(1/n+x)
    And then you solve that and end up with x=0. Because doing this is incorrect as it subtly implicitly swaps the order of the limits.

  • @gp-ht7ug
    @gp-ht7ug 7 months ago

    Nice video. I'd never heard about changing the order of limits before, but it is very interesting.

  • @zafiroshin
    @zafiroshin 6 months ago

    Beautiful

  • @gonzus1966
    @gonzus1966 7 months ago +5

    At around 9:20, why is A1,k+1 = sqrt(1+A1,k)? It seems you are using the recursive definition, but that was recursing on m, not n?

    • @TheEternalVortex42
      @TheEternalVortex42 7 months ago +6

      It's just a typo, he meant to write a_{k, 1}

    • @qm_turtle
      @qm_turtle 7 months ago +2

      He is making a slight mistake there with his indices in the induction step. You basically just need to switch the index placement for it to be correct; it should read a_(k+1,1) instead of a_(1,k+1). With that, you are able to use the recursive definition as stated on the left side of the board.

  • @sushildevkota350
    @sushildevkota350 6 months ago

    Hey Michael, can't we just assume a number y = sqrt(1/n + sqrt(1/n + ……))?
    Then y = sqrt(1/n + y),
    so y^2 = 1/n + y, so we get y = 0 or 1, so the limit doesn't exist. Can't we do this??

  • @barutjeh
    @barutjeh 6 months ago

    Hmm, what about the limit of a_{n,n}? Might work on it later.

    • @yurenchu
      @yurenchu 6 months ago

      I haven't been able to prove it (yet), but it appears that
      lim_[n→ ∞] a_{n,n} = 1 .
      It also appears that
      -log₁₀( a_{10ᵏ,10ᵏ} - 1 )∼k
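      A quick numerical look at this (a sketch; `a(m, n)` below builds m nested roots of 1/n using the recursion from the video, and the range of k is an arbitrary choice):

      ```python
      import math

      def a(m, n):
          # m nested roots: a(1,n) = sqrt(1/n), a(m+1,n) = sqrt(1/n + a(m,n))
          value = 0.0
          for _ in range(m):
              value = math.sqrt(1.0 / n + value)
          return value

      for k in range(1, 7):
          n = 10 ** k
          diag = a(n, n)
          print(k, diag, -math.log10(diag - 1.0))   # the last column stays close to k
      ```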

  • @86congtymienbac80
    @86congtymienbac80 6 months ago

    Does the sequence a_n converge? I tried it: when n → infinity for a given m, it converges to 0. It seems I was wrong.
    When m = infinity:
    a_n = (1/n + a_n)^0.5 => (a_n)^2 = 1/n + a_n => a_n = 1/2 + (1/2)*(1 + 4/n)^0.5
    When n → infinity:
    lim a_n = 1/2 + (1/2)*(1 + 0)^0.5 = 1

  • @valemontgomery9401
    @valemontgomery9401 6 months ago

    Isn’t it shown that the nested square root sum of 0 actually equals 1?

  • @vladpetre5674
    @vladpetre5674 6 months ago

    I get that it was proven that the limit as m goes to infinity exists (since the sequence is increasing and bounded), but don't you have to prove that the limit as n goes to infinity also exists before setting it to the positive root of L_n^2 - L_n - 1/n = 0 and then (by continuity) calculating it to be 1?
    I think all we've proven is that IF the limit exists, then it is 1. Similar to the ever-popular "if the limit as n → infinity of (1+2+3+...+n) exists, then it is -1/12".

  • @MrJronson
    @MrJronson 6 months ago +2

    Very nice, I have to admit the ending result of the double limit not existing does not make sense to me (I do understand the logic behind what you say).
    If we start n and m at 1 and increment each at each step, it feels like this should reach an attainable answer, whether it's convergent or divergent... Definitely thrown me for a loop!

    • @yurenchu
      @yurenchu 6 months ago +4

      n and m are independent of each other, unless we define a relation between n and m. But there are many different choices for a relation between n and m; the relation n = 1*m probably yields a different result for the limit than, say, n = sqrt(m) or n = 2^m.
      This is similar to the argument that
      lim_[a→0, b→0] a^b
      does not exist, because the value of this limit depends on the "route"/"direction of approach" towards (a,b) = (0,0) .

    • @MrJronson
      @MrJronson 6 months ago

      @@yurenchu thank you, that makes a lot of sense

    • @yurenchu
      @yurenchu 6 months ago

      ​@@MrJronson You're welcome!

  • @DR-tx3ix
    @DR-tx3ix 6 months ago +5

    Did I miss something? The usual approach is to set the infinite series equal to S, as in S = sqrt(1/n + sqrt( 1/n + ... )) , then square both sides to get S^2 = 1/n + S and solve for S using the quadratic equation.

    • @yurenchu
      @yurenchu 6 months ago

      Then you'll get _two_ distinct possible solutions for S. What he shows, is that (under his interpretation of the infinite expression) one of those possible solutions is valid, and the other is not.

    • @user-uf2uc3ce2r
      @user-uf2uc3ce2r 6 months ago +4

      Yes, but you can’t use this approach without proving the existence of the finite limit first. That is, before solving S^2 = 1/n + S, you should make sure that S itself is finite (i.e., the limit exists and is finite), otherwise the stated equation would be unjustified.

    • @leif1075
      @leif1075 6 months ago

      But I don't think that's right, is it? You can have a negative sign in front of the square root and you are still taking the positive root.. see what I mean? I hope that's clear.. so zero is also potentially a valid answer..

    • @leif1075
      @leif1075 6 months ago

      @user-uf2uc3ce2r Why would it be unjustified? Just set S equal to the expression within the limit, then you can solve for S.. if it doesn't converge you'll get infinity as an answer when you use the quadratic formula.. so I don't see why this isn't totally valid and accurate mathematically.

    • @user-uf2uc3ce2r
      @user-uf2uc3ce2r 6 months ago

      @@leif1075 Thanks for the reply. I understand your reasoning but it doesn’t seem to be rigorous. Is there a proof that getting infinity (or rather division by zero) is sufficient for the series in question to diverge? To me, S^2 = 1/n + S is merely a trick which works only under certain assumptions (such as the limit being finite for example) which should be carefully considered.

  • @danielrybuk1905
    @danielrybuk1905 6 months ago

    Why isn't it 0? Is there a better explanation than the one in the video? I didn't get it.

  • @LucasCAPS
    @LucasCAPS 6 months ago

    14:06 We get m × 0, whose limit is ∞ × 0, which can be any nonnegative real number (or infinity), so changing the order is not wrong, just useless.

  • @RexxSchneider
    @RexxSchneider 6 months ago

    This reminds me of all of the debate that occurs over the value of x^x when x = 0. If you look at x^y and take the limit(x^y) as x → 0, y → 0, you find that lim_x→0( lim_y→0(x^y) ) seems to be 1, while lim_y→0( lim_x→0(x^y) ) appears to be 0. It may be that lim_x,y→0,0(x^y) is not defined (as is the corresponding double limit in this video), or perhaps it might be 1 because that is most convenient given the behaviour of x^x. Personally, I prefer to think of it as indeterminate and hedge my bets.
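    A small numerical illustration of those two orderings (a sketch; the fixed sample values x = 0.001 and y = 0.5 and the shrinking sequences are arbitrary choices):

    ```python
    # For each fixed x > 0, x**y -> 1 as y -> 0+, so  lim_{x->0} ( lim_{y->0} x^y ) = 1:
    for y in (1e-1, 1e-4, 1e-8):
        print(0.001 ** y)        # 0.501..., 0.9993..., 0.99999993... -> 1

    # For each fixed y > 0, x**y -> 0 as x -> 0+, so  lim_{y->0} ( lim_{x->0} x^y ) = 0:
    for x in (1e-2, 1e-6, 1e-12):
        print(x ** 0.5)          # 0.1, 0.001, 1e-06 -> 0
    ```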

  • @cparks1000000
    @cparks1000000 6 months ago

    What a cool, simple example of a function where interchanging limits changes the outcome. My favorite example is taking the function $f_n(x) = 1$ if $x \in [n, n+1]$ and $f_n(x)=0$ elsewhere. We then see that $\lim_{n\to \infty} \int_{\mathbb{R}} f_n(x) dx = 1$ while $\int_{\mathbb{R}} \lim_{n\to\infty} f_n(x) dx = 0$. It's also an easy way to remember that $\int_X \lim_{n\to\infty} f_n(x) dx \leq \lim_{n\to\infty} \int_X f_n(x) dx$ for any set $X$.
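    A rough numerical sketch of that example (the grid on [0, 200] and its spacing are arbitrary choices; it only illustrates the contrast, since the real integral runs over all of ℝ):

    ```python
    import numpy as np

    xs = np.linspace(0.0, 200.0, 2_000_001)       # grid on [0, 200]
    dx = xs[1] - xs[0]

    def f(n, x):
        # f_n is the indicator function of the interval [n, n+1]
        return np.where((x >= n) & (x <= n + 1), 1.0, 0.0)

    for n in (1, 10, 100):
        print(np.sum(f(n, xs)) * dx)               # each integral is ~1, so the limit of the integrals is 1

    pointwise_limit = np.zeros_like(xs)             # f_n(x) -> 0 for every fixed x
    print(np.sum(pointwise_limit) * dx)             # the integral of the pointwise limit is 0
    ```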

  • @kyutoryuashura3961
    @kyutoryuashura3961 6 months ago

    8:30, shouldn't it be a sub k,1 since we want to prove a sub m,1 for all integer m ?
    Nice video regardless!

  • @giuseppepapari7419
    @giuseppepapari7419 7 months ago +1

    8:19 - I would say a_{k,1}, not a_{1,k}. Or am I missing something?

    • @yurenchu
      @yurenchu 6 months ago

      No, you're not missing anything. He made a little error, the k (or k+1) is obviously supposed to substitute for m, not for n .

  • @rboyce1000
    @rboyce1000 4 months ago

    ok, now can anyone evaluate the limit of a(n,n) as n goes to infinity?

  • @assassin01620
    @assassin01620 7 months ago

    It's said to be so dangerous you could hurt yourself just by look- OWW!

  • @GreenMeansGOF
    @GreenMeansGOF 7 months ago

    I’m trying to figure out how to show bounded above directly and well, we have bounded above by the n=1 case, right? So that’s where the golden ratio comes from?

    • @Schadock_Magpie
      @Schadock_Magpie 7 months ago

      Yes, if you look at 11:48 you can see L1 = phi.
      If you're trying to show that some kind of sequence converges, trying to find a fixed point is usually a good start (you still need to prove the convergence, otherwise you'll be in trouble; some functions have a fixed point even though the sequence completely diverges, and you can probably find an example or two on this channel).
      Sorry for the broken English.

    • @Schadock_Magpie
      @Schadock_Magpie 7 months ago

      To clarify this fixed point: you're looking for a function that expresses a_{n+1} as f(a_n), and then you try to solve x = f(x). If there is convergence, a solution of this equation should be a valid limit. It's a bit old for me, there may be some cases where it doesn't work, but it's easy to try (did someone ask if P=NP?)

  • @DSN.001
    @DSN.001 6 months ago

    What about the limit as x approaches infinity of (-1)^x?
    I know (-1)^x is an implicit expression that depends on x's property of being odd, even, or irrational. But if you multiply these 3 types of numbers you may get infinity; in other words, infinity has EVEN divisors inside it, although this is not true for all infinities, but continuous infinity does have them. So the answer must be "1"?

    • @yurenchu
      @yurenchu 6 months ago

      lim_{x→ ∞} cos(x) = _Does Not Exist_
      lim_{x→ ∞} sin(x) = _Does Not Exist_
      lim_{x→ ∞} (-1)^x =
      = lim_{x→ ∞} (e^[iπ])^x
      = lim_{x→ ∞} e^[iπx]
      ... Euler identity: e^(iθ) = cos(θ) + i*sin(θ) ...
      = lim_{x→ ∞} cos(πx) + i*sin(πx)
      ... substitute t = πx ...
      = lim_{t→ ∞} cos(t) + i*sin(t)
      = _Does Not Exist_
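      A tiny numerical companion to this (a sketch; it just evaluates (-1)^x = e^(iπx) at a few growing sample points to show it keeps circling the unit circle instead of settling):

      ```python
      import cmath
      from math import pi

      for x in (10.0, 10.5, 11.0, 100.25, 1000.75):
          print(x, cmath.exp(1j * pi * x))   # values keep moving around the unit circle, so no limit
      ```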

  • @goodplacetostop2973
    @goodplacetostop2973 7 months ago +3

    16:24

  • @LOL320PL
    @LOL320PL 7 months ago +1

    Okay, if I understand the initial problem correctly, then we calculate the limit as n tends to infinity of a_{n,n} (since in the initial problem there is no "m"), but this raises a question -- why do we know that the first iterated limit of a_{m,n} is the correct one? As we have seen in the video the limit of a_{m,n} as (m,n) tends to (inf,inf) does not exist, so choosing different "paths to infinity" we can get different answers -- it was clear when we compared the two iterated limits, but why is it true that the path (n,n) -> inf gives the same limit as the path (m,n) -> (inf,n) -> (inf,inf)? I am a bit rusty when it comes to analysis as I haven't done anything analysis in quite some time, so this may be a stupid question but idk how to answer it

    • @yurenchu
      @yurenchu 6 months ago +1

      No, in the video he calculated the limit as n tends to infinity of L_n = a_{∞,n} , not of a_{n,n} .
      EDIT: In the initial problem there is no m, because m already "equals" infinity (hence the dot-dot-dot after the third + sign).

    • @christophniessl9279
      @christophniessl9279 6 months ago +1

      @yurenchu Any dot-dot-dot contains, rigorously written, some variable and a limit, so there was an m from the beginning.

    • @yurenchu
      @yurenchu 6 months ago +1

      ​@@christophniessl9279 There was no m, or at least no explicit m, as "m" was already set equal to infinity.
      Note that the "dot-dot-dot" does not have a defined standard mathematical meaning in this particular case. Michael merely chose one to define what the original expression (in his opinion) should mean.

    • @jimallysonnevado3973
      @jimallysonnevado3973 6 months ago +1

      To answer your question:
      We are interested in the limit
      lim n->inf sqrt(1/n + sqrt(1/n + sqrt(1/n + ...)))
      which "roughly" means:
      we take a specific n, do the infinite iteration, and get a number. We increase n, do an infinite number of iterations again, and get another number, and so on. Finally we take the limit of those numbers.
      This is the interpretation of
      lim n->inf lim m->inf {a_m,n}
      which has a limit of 1.
      However, if we swap the limits,
      lim m->inf lim n->inf {a_m,n},
      it means we fix a finite number of iterations and let n go to infinity; we get a number. We increase m (the number of iterations), let n go to infinity again, and get another number. Finally we take the limit as we increase the number of iterations. In this case the limit is 0, because for any fixed finite number of iterations, letting n go to infinity gives 0; thus we get a sequence of 0's.
      The interpretation of lim n->inf {a_n,n} would be:
      we do 1 iteration with the number 1, 2 iterations with the number 2, 3 iterations with the number 3, and so on, then take the limit. I'm not sure whether it exists, but the sequence will be as follows:
      1, sqrt(1/2+sqrt(1/2)), sqrt(1/3+sqrt(1/3+sqrt(1/3))), sqrt(1/4+sqrt(1/4+sqrt(1/4+sqrt(1/4)))), etc., and then take the limit.
      There are other ways to let m and n go to infinity: you can also let n and m increase at different rates. Each choice corresponds to a way of increasing the number of iterations and the reciprocal of the number we are using.
      The double limit, on the other hand, does not exist, because it roughly asks: regardless of how we increase m and n, what will the limit be? Such a thing only exists if we always get the same answer no matter how we vary m and n. (A small numerical sketch follows below.)
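      A small numerical sketch of the two orderings (the helper `a(m, n)` builds m nested roots of 1/n; the particular sample values of m and n are arbitrary choices):

      ```python
      import math

      def a(m, n):
          """m nested roots of 1/n: a(1,n) = sqrt(1/n), a(m+1,n) = sqrt(1/n + a(m,n))."""
          value = 0.0
          for _ in range(m):
              value = math.sqrt(1.0 / n + value)
          return value

      # Inner limit in m (deep nesting) first, then n grows: the values approach 1.
      print([round(a(400, n), 6) for n in (10, 10**4, 10**8)])

      # Swapped order: fix a small m and let n grow; each fixed m is sent to 0 (roughly like n**(-1/2**m)).
      print([round(a(3, n), 6) for n in (10**3, 10**9, 10**15)])
      ```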

    • @LOL320PL
      @LOL320PL 6 months ago

      @@yurenchu Yeah I get what "..." means I just misread it I guess, cause I thought there was supposed to be n square roots. In that case everything is clear thanks for the response!

  • @mingmiao364
    @mingmiao364 6 months ago +1

    Brilliant counterexample! I understand that the main point of this video is to show the potential issue of exchanging limits. But going back to the original question, why is 1 the correct limit and not 0? That is, the limit at 0:00, before reformulating it as the limit of a double-indexed sequence? Does that mean the original problem is ill-posed in some sense?

    • @jursamaj
      @jursamaj 6 months ago +1

      Exactly. He showed that the answer to the *original* question depends on how you solve it, not that his 1st answer is the *correct* answer.
      My take is that the correct way to solve it is to notice that as n -> ∞, 1/n -> 0, so the problem statement reduces to the infinite sum of zeros he mentions near the end. So 0 *is* the correct answer.

    • @iloveolego
      @iloveolego 6 months ago

      In contrast to the previous comment, I think it's obvious that the principal way of evaluating the initial limit is to somehow evaluate it at n=1 (not a trivial question), then at n=2, then 3, and so on and so forth.
      But I totally agree that the original problem is ill defined, in the sense that the limit in n of the infinite expression S(n), e.g. the infinitely nested roots, actually is a double limit and DNE.
      The second limit, the number of terms approaching infinity, is implicit and inferior to the explicitly reformulated statement as lim(m,n).
      And then the order of limits is not a question of personal opinion but should be stated in the problem definition.

    • @yurenchu
      @yurenchu 6 months ago

      Clearly, Michael Penn's point is that
      lim_[n→ C] some infinite expression
      means
      lim_[n→ C] (some infinite expression)
      (note the parentheses!). That's why at 3:05 he is saying: "That's what's really happening here: that the m limit is _within_ the n limit."
      In other words, the expression B(n) = √(1/n + √(1/n + √(1/n + ...))) already involves one limit (not a double limit!) that must be evaluated first, before we can determine lim_[n→ C] B(n) .

    • @yurenchu
      @yurenchu 6 months ago

      ​@@jursamaj Nope. According to Michael, the first answer "1" is the correct answer, and the answer "0" which we get by _changing_ the order of limits, is incorrect. Just listen to him explaining:
      12:45 : "So that's our _final_ answer for our limit: it is equal to 1. (So, now, let's look at a little 'danger' before we finish the video.)"
      14:24 - 14:38 : "And so... what this really means... This doesn't add any sort of question as to [points to original problem expression] 'maybe this could be zero, maybe this limit here could be zero'; what it _does_ is it shows that you cannot always change the order of the limits."
      14:38 - 14:54: "So, the way we did it at first was _proper_ , we calculated this limit _without_ changing the order of limits. If instead, we had tried to use a shortcut and changed the order of limits, we would have gotten the _wrong_ value, we would have gotten this value of 0 ."

  • @cicik57
    @cicik57 7 months ago +1

    Hm, it solves very well and simply:
    if you make it into x = √(x + 1/n) you will get x² - x - 1/n = 0, where x = (1 ± √(1+4/n))/2, and if n -> inf, x = (1 ± 1)/2.

    • @bsmith6276
      @bsmith6276 7 months ago +1

      Useful trick if you are taking some sort of timed exam, but it's not rigorous. The video presents a fully rigorous solution.
      Using that trick you are assuming that the expression converges, without proving that the convergence holds.

    • @cicik57
      @cicik57 7 months ago

      @bsmith6276 Shouldn't it converge if 1/n goes to 0?

  • @chengningloong7691
    @chengningloong7691 6 months ago

    I think a better expression would be lim n->infinity lim m->n a_{m,n}, because "lim n->infinity lim m->infinity a_{m,n}" makes it seem like m and n are independent. But m should grow at the same speed as n; in other words they are dependent. Please correct me if I misunderstood the question 🤔️

    • @TheLuckySpades
      @TheLuckySpades 6 months ago +1

      Since the initial equation already had the infinite expression m going to infinity is equivalent to the initial expression
      If it initially only had n nested roots, then your version would be correct

    • @chengningloong7691
      @chengningloong7691 6 months ago

      Thanks for your reply, @TheLuckySpades.

  • @lacroixemmanuel9684
    @lacroixemmanuel9684 7 months ago

    I have trouble understanding why lim sqrt(1/n + ... + sqrt(1/n)) tends to 0.
    To my understanding, if 1/n tends to 0, then sqrt(1/n) > 1/n, so adding 1/n gives 1/n + sqrt(1/n) > 2/n. At the infinite limit I would say it tends to 1.
    Is there something I'm missing?

    • @zunaidparker
      @zunaidparker 7 months ago +1

      So if you have 2/n and n tends to infinity what do you have?

    • @lacroixemmanuel9684
      @lacroixemmanuel9684 7 months ago

      @zunaidparker I understand that lim(2/n) is 0 when n tends to infinity. But the expression 1/n + sqrt(1/n) is greater, and this does not mean convergence to zero, I guess.
      I would agree to a limit equal to 0 when n tends to infinity if 1/n + sqrt(1/n) were less than 2/n. If I am wrong, I would appreciate an explanation.

    • @TinySpongey
      @TinySpongey 7 months ago

      It's the order of limits that is important here. If we fix m to any finite value and let n tend to infinity we get a sequence of all zeroes in nested roots which is itself zero. A sequence of all zeroes for any finite value by definition is zero.

  • @ArnaldoMandel
    @ArnaldoMandel 6 months ago

    No need to solve the quadratic at the end. Just take the limit.

  • @Windprinc3
    @Windprinc3 6 months ago +1

    Why could we not just assign x to be the infinite sum, square both sides to get x^2 = x + 1/n and solve for x using the quadratic formula? We'd end up with x = 1/2 (1 +/- sqrt (1 + 4/n)). Now take the limit of x as n tends to infinity, and we get x = 1 or x = 0. From the assumption that n > 0, we can discard the x = 0 value, leaving only the x = 1 solution.
    I get the process of creating the a(m, n) term and using that to arrive at the solution, but is it necessary?

    • @christophniessl9279
      @christophniessl9279 6 months ago

      Well, the sequence b_n = 1/n has also only positive values, but we cannot disregard the limit 0, as 0 is in the closure of the nonnegative numbers ;->

    • @ldx8492
      @ldx8492 6 months ago

      I believe the process must be in place to be rigorous in solving for "x". I believe that you are allowed to do algebra and solve for "x" from x^2 = x + 1/n if and only if x is a convergent sequence. You are not allowed to do algebra on a recursive formula if the "x" diverges as it would be meaningless.
      Take this other example x = (n + (n + (n + ...)^2)^2 )^2, you can then write x = (n + x)^2, but you are not allowed to expand the quadratic term because you would then do algebra on divergent sequence x!

  • @warrickdawes7900
    @warrickdawes7900 6 months ago

    I was getting phi vibes when I recalled that phi is the continued fraction 1+1/(1+1/(1+1/ ...))

  • @geechan4744
    @geechan4744 6 months ago

    Similar to limit as x goes to zero of root(x+root(x+…)) ?
    Looks like ZERO

  • @jcfgykjtdk
    @jcfgykjtdk 6 months ago

    1/2

  • @Alan-zf2tt
    @Alan-zf2tt 7 months ago

    Okay - going into deep waters here - or so it feels - here goes. And using notation that I hope makes sense where a(mn | n ) and a(mn | m) just means which limit is taken first.
    And we have (subject to interpretation)
    limit of a(mn | n first) = 1 and limit of a(mn | m first) = 0 as explained in Michael's video can we choose epsilon and delta and N so that the difference between them is acceptable?
    It is a bit like limit of a sum containing sinusoidal behavior but with a contrived convergence - maybe even existence of a fuzzy limit given two precise limits?
    Fuzzy limits?

    • @toddtrimble2555
      @toddtrimble2555 6 months ago +1

      It's a little hard to parse what you're saying, but there could be some interesting questions lurking. For example: does the limit of diagonal terms a_{nn} exist? (I don't know!) Or how about the limit of terms a_{2n, n}, or of terms a_{n, 2n}? There are all sorts of paths through the square infinite grid indexed by pairs (m, n), and it's conceivable that you could get all sorts of different limits as you take various paths "out to infinity".
      For example, for the much simpler expression a_{mn} = m/n, any nonnegative real number r is a limit along a suitable path, for example by taking the limit over n of terms a_{mn} where m = integer part of r times n.

    • @Alan-zf2tt
      @Alan-zf2tt 6 months ago

      @@toddtrimble2555 agreed 100% my friend. My thinking processes are a bit rambling at times and I do try to make it coherent so less than ideal phrasing continuing ...
      + are limit definitions rooted in things from centuries ago?
      + what about bifurcation - some go wild some go oscillatory some go stable but most (all?) exhibit nested behaviors - the seed is within the seed
      + should limits as a theme be brought into 21st C to cater for things observed in bifurcations?
      + yes on two or more variables - which orderings converge, diverge, fuzzy converge, fuzzy diverge and other behaviors
      + naively: limit behaviors - if some do not absolutely converge to a limit but oscillate between a finite set of fixed values how does math of limits handle that and what are consequences arising
      + naively: scaling - something about epsilon delta means some arbitrary values may be chose - but what if arbitrary value accepted confident intervals around, say, a mean point? example a(mn | m) converges to 1 whereas a(mn | n) converges to 0 so choose epsilon = 0.5 with confidence interval of, say, plus or minus 1
      ? relativistic convergence ? epsilon with confidence intervals, finite or infinite fuzzy convergence without absolute convergence. possible example a sine component something like sin(n + 1/(pi))
      Perhaps convergence of the above are already dealt with when dealing with complex numbers, quaternions or other 'onions?
      - shrug -
      EDIT: where I put Confidence Interval or CI perhaps I should have put error bound

    • @toddtrimble2555
      @toddtrimble2555 6 months ago

      @@Alan-zf2tt Too many questions going in too many directions to really answer in a comment, but according to my poor "mathematicians' history", the epsilon-delta definition of limit was clearly and widely understood only in the 19th century, and it took a lot of work and conceptual analysis to get to that point. (The sometimes unanticipated behavior of Fourier series really forced the mathematicians of that time to finally come to grips with this.) Before that "age of rigor" [as exemplified for instance in the writings of Weierstrass], people's intuitions about calculus were typically guided by not-yet-rigorized notions of infinities and infinitesimals, which *can* be made rigorous, but the rigor had to await 20th century developments. You can find some precedents of the limit notion already in Archimedes, who in fact was way ahead of his time, in restricted contexts such as computing the area of a circle, but he didn't have the language of functions or anything like that to give the notion full justice. Eudoxus is another Hellenic mathematician who is sometimes associated with inchoate ideas of limits.
      Anyway, the notion of limit is still extremely fundamental and relevant in the 21st century. You absolutely need it in order to give crisp, clear, definite descriptions of the themes you are touching on, including chaos and stability phenomena and behavior near fixed points. I say "crisp" as a counterweight to your "fuzzy"; in spirit, there is nothing fuzzy about the notion. (There is such a thing called "fuzzy mathematics", but to set contexts for the scope of that would take us far too afield.)
      Notions of limits and convergence work pretty much the same way for other number systems that extend the real numbers, such as complex numbers and quaternions and octonions. In those contexts, you can define the distance between two complex numbers or between two quaternions, etc., and it all starts there. But you can generalize considerably: topology, the study of topological spaces, developed in order to give a wide, abstract, and flexible theory for dealing with very general notions of "continuous functions", in a way that goes way, way beyond the epsilon-delta style definitions that appeal to notions of distance [as abstracted in the notion of metric space]. There is soooo much to say.
      I just want to end by saying that lot of undergraduate mathematics training is learning how to be precise and how to construct rigorous arguments and how to get control over, how to "tame" mathematics, just as a sculptor or musician needs to gain mastery using his/her chosen instruments and tools. The videos of Penn are good in their way, and he does pay quite a bit of attention to such matters of precision and rigor. Mathematicians take off on virtually unbounded flights of fancy and imagination, but there is always the underlying discipline, and unrelenting obeisance to playing by the carefully specified rules of the game -- else the whole enterprise would come crashing down.

    • @Alan-zf2tt
      @Alan-zf2tt 6 months ago

      @@toddtrimble2555 thank you for sharing your views. I don't know too much about "whole enterprise would come crashing down" bit tho.
      I suppose math education is presently a big event in mathematics and in society. And maybe these topics suggest that there are needs for something like Category Theory?

    • @toddtrimble2555
      @toddtrimble2555 6 months ago +1

      @@Alan-zf2tt I'm glad you bring up category theory; it's hard for me to imagine where I'd be without it, and more and more people are coming to appreciate its fundamental place. A friend of mine, Eugenia Cheng, has recently written a very nice book on category theory for people who may not have much mathematical background, but which can be read with pleasure by those with a lot more background: The Joy of Abstraction. Have a look!
      When I said "crashing down", all I meant is this: math is a very tall tower, with theorems stacked on top of theorems on top of theorems, the chains of inferences reaching into the hundreds and thousands. Only with strict controls on precision of language and reasoning can the tower be stable and strong. I'm probably not saying anything you don't know. But this is one of the big tasks for the undergraduate math major: learning, through a lot of steady practice, the art of definition and proof.

  • @ianfowler9340
    @ianfowler9340 7 months ago +1

    I have a general question about the wording in a Proof by Mathematical Induction - not just about this proof. When I taught Induction in Grade 13 high school I would always write some like this for the assumption: "for 'all' k" or "for 'every' k" whereas Michael writes "for 'some' k" . At the end of the day, these seem to be equivalent to me - at least in this context. But maybe I am missing a subtle difference? Can someone give some help? Thanks in advance.
    i.e. P(k) ===> P(k+1) for all Natural Numbers k >1

    • @xizar0rg
      @xizar0rg 7 months ago

      It's the difference between using a "Strong Induction Hypothesis" (**for all** k less than some fixed N) vs a "Weak Induction hypothesis" (**there exists** an integer k, etc.). Strong induction always works, but an example of where it is especially needed when you have a sequence which requires more than just the immediate antecedent. (Perhaps your sequence A_n = A_(n-1) + A_(n-2) or something similar... Fibonacci is an example of such a sequence.)
      In this particular video, each A_n depends only on the A_(n-1)th term, so a weak induction hypothesis is fine.

    • @toddtrimble2555
      @toddtrimble2555 6 months ago +1

      @@xizar0rg No, this is not the issue at hand. First of all, the distinction between "weak induction" and "strong induction" is mostly pedagogical, because proving a strong induction step of the type "for all k, (\forall {m: 1\leq m \leq k} P(m)) => P(k+1))" is logically equivalent to proving a weak induction step of type "for all k, Q(k) => Q(k+1)" where Q(k) denotes the formula \forall {m: 1 \leq m \leq k} P(m). It makes no difference which formulation you use, although most people would find so-called strong induction, in its typical usages, easier to follow than they would this reformulation.
      The bottom line is that the statement of the induction principle (weak, strong, whatever), namely that for any predicate P on the natural numbers, [P(0) /\ (forall k) P(k) => P(k+1)] => (forall n P(n)), involves universal quantifiers throughout. So did Michael Penn make a mistake when he used the word "some"?
      I'd say: not really. In that application of induction, he wasn't going to prove that something exists. Let me put it this way: to prove a statement of the form "for all k, P(k) => P(k+1)", we typically invoke verbiage like "let k be any [particular but arbitrary] natural number. Then, under the assumption P(k), we prove P(k+1) is a consequence, as follows". Now it's very easy in colloquial English to slide from "let k be any [old] natural number and assume so and so" to "let k be some natural number and assume so and so", but this usage of "some" is not the same usage as when we assert "there exists some natural number k such that blah blah", an existential statement. It's a good question you ask, ianfowler9340, and shows up some of [ha!] the trickiness of mathematical English.
      There is other related trickiness, but I'll stop here.

    • @christophniessl9279
      @christophniessl9279 6 months ago +1

      I kind of disagree with the others; there is in fact no difference between a strong and a weak induction hypothesis. If we want to prove a proposition A(n) with a variable n, where n is a natural number, and let's say A(1) is true, then there are only two cases: either A(n) is true for all n, or there are some natural numbers for which the statement is false, and hence there is a smallest n_0 for which A(n_0) is a false statement. And if we can show that, from the fact that A(n) is true for n < n_0, it follows that A(n_0) is also true, we have our desired contradiction, and we know that only the first case needs to be considered.
      If we can show that A(n_0) is true from the fact that A(n_0 - 1) alone is true, then we have something akin to weak induction, but I actually don't care where the contradiction comes from.

    • @xizar0rg
      @xizar0rg 6 months ago +1

      Did you guys read the question? Dude is writing from the context of a high school level math class. He's asking "why does he say it one way sometimes and another way others". The exact reason Penn says it one way vs others is to differentiate between the assumptions made in his induction hypothesis.
      That they are logically equivalent (because each can be proven by the other) is irrelevant to the question being asked *at the level it is being asked*.

    • @toddtrimble2555
      @toddtrimble2555 6 months ago +1

      @@xizar0rg The "dude" seems to be a teacher, since he refers to occasions when he taught induction [grade 13? high school I guess]. And so, I thought an answer could be pitched at the level of one teacher speaking to another, with the expectation that he can then take the explanation into his personal understanding, and then rework it, if he wants, at a level that would be appropriate for his students. Anyway, your own answer didn't seem adequate for the explanation (and I expect you don't teach the subject yourself), and pardon me for saying so, but it sounded to me like you had some slight confusion and haven't mastered this stuff, at least not to a level where you could presume to teach it and answer considered and thoughtful questions about it.

  • @BederikStorm
    @BederikStorm 6 months ago

    Why not just write x_n = the sequence? Then x_n squared equals 1/n + x_n. Write x_n as the solution of the quadratic equation and find the limit of that formula.

    • @ClaraDeLemon
      @ClaraDeLemon 6 months ago

      You haven't proved it converges, so your trick doesn't guarantee the answer is correct.
      Let S = (((1+...)²+1)²+1)². Then S = (S+1)², and so S = -1/2 ± √(1-4)/2, which is a complex number. How could a sum of natural numbers give you a complex answer? Well, the sum is clearly divergent; it's bigger than the 1+1+1+1+1+... series, so it diverges as well. If your trick always worked, it should have given you infinity as the default answer, but it doesn't, because it only works for converging limits.

    • @86congtymienbac80
      @86congtymienbac80 6 months ago

      I think you only use this trick when m is considered large enough

  • @ruffifuffler8711
    @ruffifuffler8711 7 months ago

    It could mean the end of hockey. puc Puc, ... puc, puc puc puc , ...and the egg..

    • @ruffifuffler8711
      @ruffifuffler8711 7 months ago

      In a defensive move, still useable in parametrics where the latent terms are irrelevant, and there is respectable definition for the use.

  • @jimiwills
    @jimiwills 6 months ago

    Because 1×1 - 1 = 0 but also 0×0 - 0 = 0, no?

  • @jursamaj
    @jursamaj 6 months ago

    It's not a question of "changing the order of limits" because there is only 1 limit in the problem to start with. *You* inserted the 2nd limit.
    Indeed, just looking at the problem statement, it is clear that you already have that infinite sum of zeros that you mentioned.

    • @RibusPQR
      @RibusPQR 6 months ago

      The expression had infinite terms, which is unwieldy. Much easier to approach having infinite terms with a limit.

    • @iloveolego
      @iloveolego 6 months ago

      In contrast to your argument, I think it's obvious that the principal way of evaluating the initial limit is to somehow evaluate it at n=1 (not a trivial question), then at n=2, then 3, and so on and so forth.
      But I totally agree that the original problem is ill defined, in the sense that the limit in terms of n for the infinite expression S(n), e.g. the infinitely nested roots, actually IS a double limit and thus DNE.
      The second limit is the number of terms approaching infinity. This limit is implicit, but it certainly is there, and it is inferior to the explicitly reformulated statement as lim(m,n).
      And then the order of limits is not a question of personal opinion but should be stated in the problem definition.

  • @user-tg2gm1ih9g
    @user-tg2gm1ih9g 13 days ago

    couldn't you ...
    x = sqrt(1/n + sqrt(1/n + sqrt(1/n + ....
    x^2 = 1/n + sqrt(1/n + sqrt(1/n + sqrt(1/n + ....
    x^2 = 1/n + x
    x^2 - x - 1/n = 0
    use the quadratic formula
    x = (1 + sqrt(1 + 4/n))/2
    lim(n → ∞) x
    = (1 + sqrt(1 + 4/∞))/2
    = (1 + sqrt(1 + 0))/2
    = (1 + sqrt(1))/2 = (1+1)/2 = 2/2 = 1

  • @study_math
    @study_math 6 months ago +1

    It's essentially the same as my #232 😁

  • @yurenchu
    @yurenchu 6 months ago

    So what this shows, is that
    lim_[n→ ∞] √(1/n + √(1/n + √(1/n + ...)))
    does _not_ equal
    √(0 + √(0 + √(0 + ...)))
    (Right?)
    But then, what about
    √( 1*√( 1*√( 1*...))) = √(√(√(...))) = √(0 + √(0 + √(0 + ...)))
    and
    lim_[x→1] √( x*√( x*√( x*...)))
    ???

    • @86congtymienbac80
      @86congtymienbac80 6 months ago

      √(0 + √(0 + √(0 + ...))) has 2 solutions: 0 and 1

    • @yurenchu
      @yurenchu 6 months ago

      @86congtymienbac80 √(0 + √(0 + √(0 + ...))) doesn't have two solutions; it is convergent, and the limit is 0, as Michael shows in the video at 14:20.

    • @86congtymienbac80
      @86congtymienbac80 6 months ago

      @yurenchu No, lim_[m→ ∞] [lim_[n→ ∞] √(1/n + √(1/n + √(1/n + ...)))] = 0, that one is correct, while
      lim_[n→ ∞] [lim_[m→ ∞] √(1/n + √(1/n + √(1/n + ...)))] = 1.

  • @XY-vf7qy
    @XY-vf7qy 7 months ago +1

    At 14:12 it's OK to cancel out all the 1/n terms, but then you're adding an infinite number of zeros and it turns into the indeterminate form 0*infinity, so you cannot evaluate the limit to 0 without further calculation.

    • @BrollyyLSSJ
      @BrollyyLSSJ 7 months ago +3

      The point was that after swapping the order of limits, for any chosen m you're always dealing with finite number of zeros in the inner limit over n. Then, the outside limit over m is going through a sequence of zeros, so the limit is 0.
      What you're describing is more like dealing with the double limit (which doesn't exist, as pointed out in the video) - you could choose a sequence of (m, n) where they both linearly increase at the same rate and then you're dealing with adding infinitely many zeros inside the limit.

    • @anshumanagrawal346
      @anshumanagrawal346 7 months ago

      "Indeterminate form" is a misnomer. There is no such thing. This is just a tool to help Calculus students understand/remember you can't treat limits like substitute everything.

  • @leickrobinson5186
    @leickrobinson5186 6 months ago

    Today’s Michael Penn error is at 8:40. Can you find it?

  • @terryendicott2939
    @terryendicott2939 7 months ago

    This is the most complicated "proof" that 0 = 1. Cool.

  • @klementhajrullaj1222
    @klementhajrullaj1222 6 months ago

    Your hairs again ...!

  • @joshuanugentfitnessjourney3342
    @joshuanugentfitnessjourney3342 7 months ago

    I've got a difficult problem, which might be impossible, that I came up with:
    f(x) = 1 + (1/f(x))
    It's supposed to be an infinite recursion type.

    • @sambhusharma1436
      @sambhusharma1436 7 months ago

      What is the question ❓

    • @jellymath
      @jellymath 7 months ago +1

      @sambhusharma1436 I guess: "What (implicit) definition of f(x) satisfies the equation above for all real x?"

    • @strikeemblem2886
      @strikeemblem2886 7 months ago

      f(x) = 1 + 1/f(x) is not a recurrence relation.

    • @jellymath
      @jellymath 7 months ago

      @@strikeemblem2886 isn't that actually just a quadratic equation in y = f(x), meaning f(x) is a constant function?

    • @strikeemblem2886
      @strikeemblem2886 7 months ago +1

      @jellymath Not necessarily. Any function f: R -> {φ, 1−φ} (the two roots of y = 1 + 1/y) works. Moreover, these are the only such functions.

  • @vremiavremiavremiavremiaclock
    @vremiavremiavremiavremiaclock 7 months ago

    😊😊😊😊😊😊😊

  • @leif1075
    @leif1075 6 months ago

    WAIT A MINUTE, I didn't see why you rejected the NEGATIVE SIGN OPTION. Remember, a negative in front of the square root doesn't mean the square root is a negative number.. it just means it's the negative of the positive square root.. and if you take that option you get zero, not a negative number, so I don't see why that isn't valid also.. anyone else see this?

  • @charleyhoward4594
    @charleyhoward4594 7 months ago +1

    didn't enjoy this