How to deal with any recursive sequence.

  • Published Jan 2, 2025

COMMENTS • 86

  • @MathFromAlphaToOmega
    @MathFromAlphaToOmega 3 years ago +42

    In a linear algebra class, my professor explained why we might expect nr^n to be a solution to the recursion if there are repeated roots:
    For r and s distinct, the solutions are r^n and s^n. Any linear combination of solutions is another solution, so (r^n-s^n)/(r-s) is another solution. Now letting s go to r, we get the derivative of r^n in the limit, or nr^(n-1). That can be rescaled to give nr^n.
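
The limiting argument can be sanity-checked numerically. A minimal sketch in Python; the values r = 3 (hence A = 6, B = -9) are chosen purely for illustration:

```python
# If x^2 - Ax - B = (x - r)^2 has a double root r, then a_n = n * r**n
# should satisfy the recurrence a_{n+2} = A*a_{n+1} + B*a_n.
r = 3
A, B = 2 * r, -r * r          # since (x - r)^2 = x^2 - 2*r*x + r^2
a = [n * r**n for n in range(12)]
assert all(a[n + 2] == A * a[n + 1] + B * a[n] for n in range(10))
print("n*r^n solves the repeated-root recurrence for r =", r)
```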

  • @sholinwright6621
    @sholinwright6621 3 years ago +21

    I think we used to solve difference equation probs with the Z transform, but it’s been 30+ yrs since I’ve been in college. An interesting problem would be one with both difference terms and differential terms. This actually has a practical application as a circuit with both distributed elements and discrete elements.

    • @buxeessingh2571
      @buxeessingh2571 3 years ago +2

      I remember the theory of non-linear differential difference equations being pretty limited without some "nice" forms combining the derivatives with the sequences.

  • @holyshit922
    @holyshit922 8 days ago

    12:02 Ways of expanding this are:
    1. By taking derivative of geometric series
    2. By using binomial expansion
    3. By series multiplication which will lead us to convolution of sequences
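
All three routes give the coefficient (n+1)·r^n for 1/(1-rx)². A quick sketch comparing route 1 (differentiating the geometric series) with route 3 (series multiplication, i.e. convolution); r = 3 is an arbitrary illustrative choice:

```python
# Coefficients of 1/(1 - r*x)^2 two ways: (n+1)*r^n from differentiating
# the geometric series, and by convolving the geometric series with itself.
r, N = 3, 8
deriv = [(n + 1) * r**n for n in range(N)]
geo = [r**n for n in range(N)]
conv = [sum(geo[i] * geo[n - i] for i in range(n + 1)) for n in range(N)]
assert deriv == conv
print(deriv)
```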

  • @0cgw
    @0cgw 3 years ago +10

    Let X be the vector space of sequences taking values in the complex numbers. Let L be the left-shift operator, so that L(a,b,c,d,...)=(b,c,d,...). Suppose that r and R are solutions to the auxiliary quadratic equation. Define an operator on X by S=(L-r)(L-R). It follows that if (a) is a sequence solving the recurrence relation, then S(a)=0. Next, let (b)=(L-R)(a) and (c)=(L-r)(a); then (since S=(L-r)(L-R)=(L-R)(L-r) ), we have (L-r)(b)=(L-R)(c)=0. But these are easy to solve: they are (b)=(A,Ar,Ar²,...) and (c)=(B,BR,BR²,...). Now, (r-R)(a)=(c)-(b), and hence if R and r differ, we have the required solution.
    What happens if R=r? In this case, (L-r)²(a)=0. Let (c)=(L-r)(a); then (L-r)(c)=0, so (c)=(A,Ar,Ar²,...). Thus (L-r)(a)=(A,Ar,Ar²,...). Now define the sequence (b) by b_n=a_n/r^n; dividing a_{n+1}-r·a_n=A·r^n by r^(n+1) gives b_{n+1}-b_n=A/r for every n. [If r=0 is a repeated root, the recurrence relation has solution (a_0,a_1,a_2,...)=(A,B,0,0,...), so we may assume that r is non-zero.] Thus b_n=Cn+D (where C=A/r) and a_n=(Cn+D)r^n for arbitrary constants C and D.
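
The operator factorization can be checked on a finite chunk of a concrete sequence; a sketch using Fibonacci (whose auxiliary roots are φ and 1-φ), chosen purely as an example:

```python
# (L - r) applied to a finite chunk of a sequence: n -> a_{n+1} - r * a_n.
def shift_minus(r, a):
    return [a[n + 1] - r * a[n] for n in range(len(a) - 1)]

phi = (1 + 5 ** 0.5) / 2
fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])
# S = (L - phi)(L - (1 - phi)) should annihilate the Fibonacci sequence.
out = shift_minus(phi, shift_minus(1 - phi, fib))
assert all(abs(x) < 1e-6 for x in out)
print("S(a) = 0, up to floating-point error")
```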

    • @olegt962
      @olegt962 3 years ago +1

      Wow.. can you provide some reference, pls. I appreciate that.

    • @ojasdeshpande7296
      @ojasdeshpande7296 2 years ago

      @@olegt962 no

  • @stewartcopeland4950
    @stewartcopeland4950 3 years ago +4

    14:03: rather a_n = (R + S(n+1))·r^n

    • @demon-ow4vg
      @demon-ow4vg 7 months ago

      Yup, indexing first from 0 gives a_0 = 0,
      so the sum runs from n=1 of n·r^(n-1)·x^(n-1).
      But that is the same as the sum from n=0 of (n+1)·r^n·x^n.
      I thought that I had made a mistake.

  • @themalevolent9373
    @themalevolent9373 3 years ago +45

    For anyone interested in reading more, this is just the general solution to a second order linear difference equation, taught in virtually every first year course in differential equations.

    • @Minskeeeee
      @Minskeeeee 3 years ago +12

      difference equations weren't touched on at all in my intro differential equations course, I never knew that they were in first year courses!

    • @dyld921
      @dyld921 3 years ago +7

      Differential Equations is not a first-year course, and it doesn't typically cover difference equations.

    • @themalevolent9373
      @themalevolent9373 3 years ago +3

      @@Minskeeeee Huh, that's interesting. For us they were included in our first year ODE course, with the justification that they are essentially discrete analogues to ODEs. The methods used to solve both are almost identical.

    • @themalevolent9373
      @themalevolent9373 3 years ago +4

      @@dyld921 Depends on the university I guess.

    • @HilbertXVI
      @HilbertXVI 3 years ago +8

      Naw your typical ODE course only has... ODEs

  • @simonreiff3889
    @simonreiff3889 3 years ago +4

    Very nice proof! I wish I had thought of using that lemma, which I did not know, as it makes this proof far more manageable. I got as far as the partial fraction decomposition when I realized that my roots of the denominator 1-Ax-B(x^2) would not quite be the same as r_1 and r_2. I tried sailing forth anyway by applying the quadratic formula to the denominator, setting the roots of that expression to alpha_1 and alpha_2, and wrote 1-Ax-B(x^2)=(x-(alpha_1))(x-(alpha_2)). I then tried to compute the partial fraction decomposition by hand and it got a bit gnarly, and when I realized I still had to relate my alpha roots to r_1 and r_2, I thought there was no way this was the solution Michael Penn had in mind and decided to watch the video. With the lemma, you don't even need to compute the partial fraction decomposition itself at all! You just use the fact that PFD is possible, and then you move on immediately without finding explicit formulas for R and S, though we know we can find such formulas in terms of A, a_0, and a_1 if given those values. In retrospect, since I didn't know to use the lemma and had to derive it, the better approach would have been to find the relationship between r_1 and r_2, on the one hand, and alpha_1 and alpha_2, on the other, before attempting the partial fraction decomposition. Really cool problem and video.

    • @diniaadil6154
      @diniaadil6154 3 years ago +1

      If P = x²-Ax-B and Q = 1-Ax-Bx², notice that we have Q(x)=x²·P(1/x), meaning that the roots of Q are exactly the reciprocals of the roots of P. Since P is monic, the factorization of Q follows as (1-r_1x)(1-r_2x).

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago

      @@diniaadil6154 yup, there's even a name for this - it's called the reciprocal polynomial or reflected polynomial (although this might be slightly niche, I'm not sure)

  • @yoav613
    @yoav613 3 years ago +4

    Nice. There is also the short version of blackpenredpen: suppose a_n = r^n; then r^(n+2) = A·r^(n+1) + B·r^n. Now divide by r^n to get r² = Ar + B, so r_1 and r_2 are the roots of x²-Ax-B. Then R and S can be found from the given a_0 and a_1: a_0 = R+S and a_1 = R·r_1 + S·r_2, two equations in the two unknowns R and S.
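
That recipe runs end-to-end in a few lines; a sketch for the Fibonacci case (A = B = 1, a_0 = 0, a_1 = 1, used here only as an illustration):

```python
import math

A, B = 1, 1
a0, a1 = 0, 1
d = math.sqrt(A * A + 4 * B)
r1, r2 = (A + d) / 2, (A - d) / 2      # roots of x^2 - Ax - B
S = (a1 - a0 * r1) / (r2 - r1)         # from a0 = R + S, a1 = R*r1 + S*r2
R = a0 - S
print([round(R * r1**n + S * r2**n) for n in range(10)])
# -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```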

    • @csDiablo1
      @csDiablo1 3 years ago +2

      What gives you the right to "suppose" the given claim? How can you be sure this assumption can be made and that such an r exists with a_n = r^n?

    • @yoav613
      @yoav613 3 years ago +1

      @@csDiablo1 What gives you the right to suppose an induction hypothesis? Well... you try it and it works!!

    • @csDiablo1
      @csDiablo1 3 years ago

      @@yoav613 Fair point, but I was confused that you also assumed, without stating it, that a_n = m1·r1^n + m2·r2^n (and used it in your last sentence) at the same time as you supposed a_n = r^n.

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago +1

      Michael mentions this method before presenting his. Personally I like Michael's method more because it actually explains why we get what we get in the repeated-roots case (there are tricks you can do with the other method too - once you have found one solution, there are ways of generating the second solution from it, analogous to methods taught in differential equations classes, but Michael's approach doesn't require any such "tricks").

  • @shpensive
    @shpensive 3 years ago +1

    Wow I really appreciated this! I learned a lot and was completely able to follow, and even did the side proof! Thank you :)

  • @goodplacetostop2973
    @goodplacetostop2973 3 years ago +8

    17:06

  • @minwithoutintroduction
    @minwithoutintroduction 3 years ago +2

    Thank you very much. This reminded us of our university days.
    Great work.
    Greetings from the southeast of the Kingdom of Morocco.

  • @MathHammer
    @MathHammer 3 years ago +3

    I have an EE background. Z-transforms are the thing!

  • @mohitkrjain9396
    @mohitkrjain9396 3 years ago

    13:40 shouldn't it be summation of (R+nS/r)*r^n*x^n?

  • @pavlopanasiuk7297
    @pavlopanasiuk7297 1 month ago

    I guess linear algebra is an easier tool here. Arrange the matrix evolution law, in this case a{n+2} = Aa{n+1}+Ba{n}; a{n+1} = a{n+1} relates (a{n+2}, a{n+1}) to (a{n+1}, a{n}). It therefore can be related to (a1, a0) via matrix power, which (for diagonalizable matrix) is a linear combination of constant matrices with eigenvalues to the nth power. As long as eigenvalues are distinct, this works for higher order linear relations too.
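
The evolution law can be run in one loop, without even diagonalizing; A = B = 1 (Fibonacci) is chosen here just for concreteness:

```python
# (a_{n+2}, a_{n+1}) = M (a_{n+1}, a_n) with companion matrix M = [[A, B], [1, 0]].
A, B = 1, 1
a0, a1 = 0, 1
v = (a1, a0)
terms = [a0, a1]
for _ in range(8):
    v = (A * v[0] + B * v[1], v[0])    # one application of M
    terms.append(v[0])
print(terms)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```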

  • @tomholroyd7519
    @tomholroyd7519 3 years ago +2

    What? Nobody mentioned the Fibonacci sequence? What are R and S in that case?

  • @slavinojunepri7648
    @slavinojunepri7648 1 year ago

    Very helpful review for me. ❤

  • @Aditya_196
    @Aditya_196 8 months ago

    Finally found this thing I had wished for for a long time

  • @davidcroft95
    @davidcroft95 3 years ago +10

    Can you also make a video on "non-linear" recursion? i.e. a_(n+2)=A*a_(n+1)+B*(a_n)^2 or a_(n+2)=a_(n+1)*a_n or something like those.
    Thank you in advance

    • @paulstelian97
      @paulstelian97 3 years ago +1

      I don't think you're even gonna get nice solutions in the first place with that one, are you?

    • @davidcroft95
      @davidcroft95 3 years ago

      @@paulstelian97 I don't know, that's why I'm asking XD

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago +1

      @@davidcroft95 Non-linear recurrence relations are hard to find closed-form solutions to, in the same way that non-linear differential equations (let's say ordinary differential equations to keep things simple) are hard to find closed-form solutions to - there is an analogy between ODEs and difference equations (recurrence relations) that can be made precise.
      It is much easier to start with a function f and differentiate it twice and form a 2nd order ODE which you know is satisfied by f than to actually try and solve a random 2nd order ODE which has not been 'cooked up' in this way (indeed most random ODEs won't even have a closed-form solution which is an elementary function). In the same way, it is much easier to start with a sequence a_n and take its discrete derivative twice -- that is, the forward difference operators Δa_n := a_{n+1} - a_n and Δ²a_n := ΔΔa_n = a_{n+2} - 2a_{n+1} + a_n -- and form a 2nd-order difference equation F(Δa_n, Δ²a_n) = 0, which you know is satisfied by a_n. This can then be interpreted as a 2nd-order recurrence relation G(a_{n+2}, a_{n+1}, a_n) = 0 (for example, the recurrence in this video is a_{n+2} - A·a_{n+1} - B·a_n = 0, so G(x, y, z) = x - Ay - Bz), and -- as for ODEs -- solving a random 2nd-order difference equation/recurrence relation will be much more difficult than solving one that has been specifically 'cooked up' in the manner just described, and closed-form solutions in terms of well-known functions may not even exist.
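
A sketch of the forward-difference operators described above, applied to a_n = n² (an illustrative choice of sequence):

```python
def delta(a):                      # discrete derivative: (Δa)_n = a_{n+1} - a_n
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

a = [n * n for n in range(8)]      # a_n = n^2
print(delta(a))         # [1, 3, 5, 7, 9, 11, 13]
print(delta(delta(a)))  # [2, 2, 2, 2, 2, 2] -- Δ²a_n = a_{n+2} - 2a_{n+1} + a_n
```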

    • @davidcroft95
      @davidcroft95 3 years ago

      @@schweinmachtbree1013 eh, as I thought, but it was worth trying.
      Thank you for your answer

  • @orenfivel6247
    @orenfivel6247 3 years ago +1

    What about complex conjugate roots - converting them to cos/sin form so that a_n will have a real solution form?
    Can you please make a video about it?

    • @orenfivel6247
      @orenfivel6247 3 years ago

      As I inspected:
      If r1,2 are complex, w/
      r1 = r=u+i·v & r2 = r̅ = u-i·v:
      then
      a_n = R(r1)^n+S(r2)^n=...(complex algebraic manipulations)
      =|r|^n·{ [R+S]·cos(n·arg{r}) + i·[R-S]·sin(n·arg{r}) }
      Due to PFD [9:12] and the facts: r+r̅ =2·Re(r), r-r̅ =i·2·Im(r),
      we can solve for R+S and i·[R-S]
      R+S=a_0∈ℝ
      i·[R-S]={a_1 + [Re(r)-A]a_0}/Im(r) ∈ℝ
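
This decomposition can be checked on a concrete recurrence; a sketch with A = 2, B = -2 (roots 1 ± i, so |r| = √2 and arg r = π/4) and seeds a_0 = 0, a_1 = 1, all chosen for illustration:

```python
import math

A, B = 2, -2
a = [0, 1]                               # seeds a_0, a_1
for _ in range(10):
    a.append(A * a[-1] + B * a[-2])
mod, arg = math.sqrt(2), math.pi / 4     # |1 + i| and arg(1 + i)
closed = [round(mod**n * math.sin(n * arg)) for n in range(12)]
assert closed == a
print(a)  # [0, 1, 2, 2, 0, -4, -8, -8, 0, 16, 32, 32]
```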

  • @MathSolvingChannel
    @MathSolvingChannel 3 years ago

    This is the standard method for solving a three-term recurrence relation; it would also be better to include the repeated-roots case and the complex-roots case to make it complete.

    • @alxjones
      @alxjones 3 years ago +2

      At 10:41, he handles the repeated case. Whether or not the roots are real is actually irrelevant, though if one is particularly interested in solving the recurrence over just the reals, they should show that the proposed solution is real even when the roots are not.

  • @siyuanhuo7301
    @siyuanhuo7301 3 years ago +1

    Can someone post the link to the original video?

  • @N9199
    @N9199 3 years ago +2

    So it's basically rewrite using Linear Algebra and get the eigenvalues of the matrix?

    • @Wabbelpaddel
      @Wabbelpaddel 3 years ago +1

      Yup. Very powerful - that's also where the exponential formula for the Fibonacci numbers comes from.
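
A quick check that the companion-matrix eigenvalues are exactly the characteristic roots; the sketch assumes NumPy is available and uses the Fibonacci matrix purely as an example:

```python
import numpy as np

M = np.array([[1.0, 1.0], [1.0, 0.0]])   # companion matrix of x^2 - x - 1
w = sorted(np.linalg.eigvals(M))
phi = (1 + 5 ** 0.5) / 2
assert abs(w[1] - phi) < 1e-9 and abs(w[0] - (1 - phi)) < 1e-9
print("eigenvalues are phi and 1 - phi")
```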

  • @yonathanashebir6324
    @yonathanashebir6324 1 year ago

    thanks

  • @nutriascoc6331
    @nutriascoc6331 2 years ago

    thanks so much

  • @seanmacfoy5326
    @seanmacfoy5326 2 years ago

    Hi Michael,
    Is there any way to generalise this reasoning to the solution of homogeneous second-order differential equations? Thanks!

  • @wyattstevens8574
    @wyattstevens8574 6 months ago

    "What about the seeds?"
    Well, first let's call A "P" and -B "Q" (the usual Lucas-sequence convention). I'll consider two sequences U, V that satisfy a_(n+1) = A·a_n + B·a_(n-1): U has seeds U_0 = 0 and U_1 = 1, and the corresponding seeds for V are 2 and P. Now solve the following system of equations to write a general sequence with seeds a_0 = A', a_1 = B' in the form C·U + D·V:
    [[0 2 | A']
    [1 P | B']].
    Now U and V have relatively simple closed forms: U_n = ((r_+)^n - (r_-)^n)/((r_+) - (r_-)) and V_n = (r_+)^n + (r_-)^n.
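
The U/V pair here matches the standard Lucas sequences; a sketch with P = 1, Q = -1 (so U is Fibonacci and V is the Lucas numbers), values chosen only for illustration:

```python
# Lucas-sequence convention: r_+ and r_- are the roots of x^2 - P*x + Q.
P, Q = 1, -1
d = (P * P - 4 * Q) ** 0.5
rp, rm = (P + d) / 2, (P - d) / 2
U = [round((rp**n - rm**n) / (rp - rm)) for n in range(10)]
V = [round(rp**n + rm**n) for n in range(10)]
print(U)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34] -- seeds 0, 1
print(V)  # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76] -- seeds 2, P
```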

  • @ivarorno
    @ivarorno 3 years ago +1

    Couldn't there be convergence issues with the sum? Surely all those rules only apply to convergent sums a priori?

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago

      The method of generating functions uses what are called "formal infinite sums" in the mathematical lingo. That is, we don't worry/care about convergence - and this is rigorous; the point is that two generating functions are equal if and only if all their terms are equal, which often allows one to extract a closed form (e.g. at 10:08 in this video)

    • @ivarorno
      @ivarorno 3 years ago

      @@schweinmachtbree1013 Oh right, so I guess the partial sums are the elements of the sequence? That's why you're not actually interested in the tail and can compare terms.

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago

      @@ivarorno One doesn't really think of generating functions as sequences of partial sums - they really are just infinite sums where one totally ignores convergence* (a similar concept is formal power series - in fact a generating function is just a formal power series associated with a sequence). The intuition is that a generating functions is "a clothesline upon which one hangs a sequence" (to quote the author of Generatingfunctionology) which allow us to perform algebraic manipulations to glean information about the sequence.
      *If your concern is that ignoring convergence is logically dubious then here is a way to give a completely rigorous definition. The rules for addition and multiplication of power series, which follow from the laws of arithmetic (e.g. the associative law and the distributive law), are (\sum_n a_n x^n) + (\sum_n b_n x^n) = \sum_n (a_n + b_n) x^n and (\sum_{n=0}^{infinity} a_n x^n)(\sum_{n=0}^{infinity} b_n x^n) = \sum_{n=0}^{infinity} (\sum_{i, j: i+j=n} a_i b_j) x^n. We can now "suck the life out of the generating functions" by writing (a_0, a_1, a_2, ...) instead of a_0 + a_1 x + a_2 x^2 + ..., or in sequence notation, (a_n)_n instead of \sum_n a_n x^n. One can then *define* addition and multiplication as (a_n)_n + (b_n)_n = (a_n + b_n)_n and ((a_n)_n)((b_n)_n) = (\sum_{i, j: i+j=n} a_i b_j)_n. Now there are no issues of convergence because there is nothing which may or may not converge - we abstracted it away. However people do not usually use this sequence formalism for generating functions in practice because it gets rid of all the intuition (we have abstracted away the variable x, so calling such a thing a generating FUNCTION would be very confusing).
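
The abstracted product is just discrete convolution on coefficient lists; a minimal sketch, with finite truncations standing in for the infinite sequences:

```python
# Multiply two truncated formal power series by c_n = sum_{i+j=n} a_i * b_j;
# convergence never enters.
def mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 - x) * (1 + x + x^2 + ...): everything past the constant term cancels,
# except at the truncation edge.
print(mul([1, -1], [1] * 6))  # [1, 0, 0, 0, 0, 0, -1]
```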

    • @ivarorno
      @ivarorno 3 years ago +1

      @@schweinmachtbree1013 The problem is that he's using rules for convergent sums a few times throughout the video which you would have to prove also for your abstractions, which feels pretty dubious, but if you claim that this has been done or the transition to the abstraction has been logically verified I'll take your word for it.

  • @smurfsmurfy6766
    @smurfsmurfy6766 3 years ago

    Why can we transform R/(1-rx) into a geometric series if we don't know if |rx| < 1?

    • @olegt962
      @olegt962 3 years ago +2

      Suppose that divergent series converge :v

    • @user-en5vj6vr2u
      @user-en5vj6vr2u 2 years ago

      Modify the domain of f(x) so that abs(rx) < 1

  • @MarcoMate87
    @MarcoMate87 3 years ago

    Excellent video for a very interesting theorem. But what about the case when r_1 and r_2 are complex conjugates? If we are looking for a sequence a(n) with complex values there is no problem at all, but if we want a real-valued solution?

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago +1

      the method still goes through: we get a_n = R (r_1)^n + S (r_2)^n where r_1, r_2 are conjugate. writing them in polar form as r_1 = Ae^{ix} and r_2 = Ae^{-ix} where A and x are real, we get a_n = R A^n e^{inx} + S A^n e^{-inx} = R A^n (cos(nx) + i sin(nx)) + S A^n (cos(nx) - i sin(nx)), and splitting this into terms involving i and those that don't we get a_n = A^n(R+S) cos(nx) + i A^n(R-S) sin(nx). If we want real solutions then, assuming R and S are real*, we need R-S = 0, giving a_n = 2R A^n cos(nx), so the solution is a sinusoid, which fits into the overall picture because solutions like a_n = C r^n look like discrete sinusoids when r = -1 (other cases are r=1 where we get a constant solution, |r|>1 where the solution blows up, possibly alternating sign, and |r| < 1 where the solution decays, possibly alternating sign, and linear combinations of such possibilities, i.e. a_n = C r^n + D s^n)
      (*if we are already given R and S and they are complex, then perhaps there is a more complicated decomposition into real and imaginary parts from which one can deduce real solutions)

  • @FrançoisAllouin
    @FrançoisAllouin 1 year ago

    thanks !

  • @theproofessayist8441
    @theproofessayist8441 3 years ago

    And just to check I'm guessing two complex roots does not make any sense because the complex numbers are not well ordered so we can't form a sequence of complex numbers and so we didn't bother checking that case?

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago

      complex roots would make sense. the method in the video still goes through fine, just we get a complex solution out the other side (however there are certain conditions you can impose on R and S to arrange these solutions are, in fact, real valued - there are some comments about this in this comment section if you are interested).
      you can form a sequence from whatever set you want, regardless of any ordering (well-ordered or otherwise); a sequence (x_n)_{n=0,1,2,...} from a set X is just a choice of elements x_0 ∈ X, x_1 ∈ X, x_2 ∈ X, and so on (e.g. we have the constant sequences where all of these are equal, or you could have a sequence of complex numbers which goes around the unit circle in the complex plane forever without ever repeating, or one which spirals outwards - there's all sorts you can do :D)

  • @juandesalgado
    @juandesalgado 3 years ago +2

    It gets funnier if you add a constant to the recurrence rule.

  • @Happy_Abe
    @Happy_Abe 3 years ago

    How do we know the generating function converges

    • @RandomBurfness
      @RandomBurfness 3 years ago +1

      When generating functions are used, convergence is not even considered.

    • @Happy_Abe
      @Happy_Abe 3 years ago

      @@RandomBurfness why’s that?

    • @RandomBurfness
      @RandomBurfness 3 years ago

      @@Happy_Abe It's by convention.

    • @schweinmachtbree1013
      @schweinmachtbree1013 3 years ago

      @@Happy_Abe Two generating functions are equal if and only if all of their terms are equal, regardless of convergence, which is what "makes them tick" (e.g. michael uses this at 10:08 in this video to obtain the closed form for a_n, which was the point of using generating functions here). So whether or not the infinite sums actually converge doesn't matter for the methods/theory of generating functions, so we just ignore convergence (we sometimes refer to the infinite sums as "formal infinite sums" when we are thinking in this way)

    • @Happy_Abe
      @Happy_Abe 3 years ago +1

      @@schweinmachtbree1013 thanks for this explanation

  • @비기-y8c
    @비기-y8c 3 years ago

    Very nice video

  • @nna7yk
    @nna7yk 3 years ago +1

    How to deal with any recursive sequence... I think, not any... ;)
    a₀=0, a₁=2, aₙ₊₁ = √(2 − aₙ₋₁/aₙ). Calculate & prove lim(n→∞) 2ⁿ·aₙ, where n≥1, n∈ℤ.
    (or with plain symbols: a_(0)=0, a_(1)=2, a_(n+1)=sqrt(2-[a_(n-1)/a_(n)]); calculate & prove LIM (n->inf) (2^n)*a_(n), where n>=1, n∈Z)

  • @jesusalej1
    @jesusalej1 3 years ago +2

    Why make it easier, if it can be harder!

  • @marcorademan8433
    @marcorademan8433 3 years ago +4

    This screams Z-transform. Sorry guys, engineering student here.

  • @kujmous
    @kujmous 3 years ago

    Pausing at the start. Getting shot glass. Take a shot every time he says Fibonacci. Two for every mention of the Golden Ratio or Phi.

  • @MultiAndAnd
    @MultiAndAnd 3 years ago

    Just diagonalize the matrix associated with the sequence.

  • @CM63_France
    @CM63_France 3 years ago

    Hi,
    Nice!

  • @changjeffreysinto3872
    @changjeffreysinto3872 3 years ago

    Kinda Olympiad combinatorics: method of characteristic equations

  • @bravobessa3684
    @bravobessa3684 3 years ago +1

    The fact is wrong... or not entirely true.
    If a(n+2) = A·a(n+1) + B·a(n), and if the polynomial X²-AX-B has two distinct roots, then a equals...
    The case where the root is repeated has an issue, because the "then" is in the wrong place - or it should be an "if and only if".

  • @guruji75
    @guruji75 3 years ago +4

    Time to go down the rabbit hole… I have no idea what generating functions are. It might be a good idea to explain what they are and how they work

    • @ConManAU
      @ConManAU 3 years ago +4

      Generating functions are basically power series where the coefficients are a sequence you want to work with. So they’re a_0 + a_1 x + a_2 x^2 + … where a_n is the sequence. And you mainly use them the way Michael does here - manipulating the generating function using, e.g. Taylor series expansions, to find stuff out about the sequence.

    • @alxjones
      @alxjones 3 years ago +4

      If you have any sequence a_n, then f(x) = sum a_n x^n is the generating function of that sequence. The important thing to note is that convergence of that power series is NOT considered -- we call this a "formal power series", as in it has the "form" of a power series without the "details" of convergence.
      As for how they work, you can see in this proof how they are typically used. The fact that the series is taken up to infinity means that we can do manipulations to the "front" of the sequence and push some of their effects down the back in a way that we don't see, allowing us to arrive back at where we started with some additional terms ( like checking in new guests at a full Hilbert Hotel: en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel#Finitely_many_new_guests ).
      The upshot is a new expression for f(x) in terms of f(x) which encodes the effect of iteration by the given recurrence, which we can solve to get an explicit expression for f(x). Then, if that explicit expression can be written as a formal power series, its coefficients must be the coefficients of f(x), and thus the elements of the sequence defined by the recurrence which f(x) generates, i.e. the solution.

  • @laetvm6793
    @laetvm6793 3 years ago +1

    "Python, go brrrr"

  • @HoSza1
    @HoSza1 5 months ago

    HOW is this "any" recurrence? It's only a very narrow class of all the recursions possible. This clickbaity sneakiness is exactly why I unsubscribe from "any" channels. Enough is enough.