18. Markov Chains III

  • Published 16 Nov 2024

COMMENTS • 15

  • @dasnyds00
    @dasnyds00 11 years ago +15

    This is incredible. First time seeing Markov Chains, and I can't believe I never took the time to check it out. Plugging in the Poisson blew my mind.

  • @kercker
    @kercker 9 years ago +10

    The lecture is marvelous. Thanks, professor John Tsitsiklis.

  • @videofountain
    @videofountain 7 years ago +6

    Thanks. The MIT website provides PDFs of the prepared slides, so the student can view a clear PDF side by side with the video on their computer. The state ID numbers are absent from the PDFs for this lecture, even though Professor John Tsitsiklis refers to them. Some PDF readers, including Adobe products, can annotate the downloaded PDF with a text box, so you can add the same IDs to your own copy of the document. The annotation can make the lecture easier to follow.

  • @oakschris
    @oakschris 8 years ago +4

    I do think that computing the eigenvectors of the transition matrix is an easier way to figure out the steady-state space and the rate at which convergence happens. Definitely worth examining as a comprehension supplement. Basically, the space spanned by the eigenvectors with eigenvalue 1 is the steady-state space, and the second-largest eigenvalue magnitude governs the rate of convergence: the closer it is to 1, the slower the chain converges (see the sketch after this thread).

    • @RalphDratman
      @RalphDratman 7 years ago

      I agree

    • @Yangyang-1995-
      @Yangyang-1995- 7 years ago +1

      It's not easy when the transition matrix is large... but maybe it is easier for a computer.
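
    A minimal sketch of the eigenvector approach described in this thread, using NumPy on a small hypothetical 3-state transition matrix (the matrix is illustrative, not from the lecture). The left eigenvector with eigenvalue 1, normalized to sum to 1, is the steady-state distribution, and the second-largest eigenvalue magnitude indicates how quickly the chain converges to it.

        import numpy as np

        # Hypothetical 3-state transition matrix (rows sum to 1); not from the lecture.
        P = np.array([[0.5, 0.3, 0.2],
                      [0.1, 0.7, 0.2],
                      [0.2, 0.3, 0.5]])

        # Left eigenvectors of P are right eigenvectors of P transposed.
        eigvals, eigvecs = np.linalg.eig(P.T)

        # The eigenvector for eigenvalue 1, normalized to sum to 1, is the
        # steady-state distribution pi satisfying pi P = pi.
        idx = np.argmin(np.abs(eigvals - 1.0))
        pi = np.real(eigvecs[:, idx])
        pi = pi / pi.sum()
        print("steady state:", pi)

        # The second-largest eigenvalue magnitude controls the convergence rate:
        # the closer it is to 1, the slower r_ij(n) approaches pi_j.
        second = sorted(np.abs(eigvals), reverse=True)[1]
        print("second-largest |eigenvalue|:", second)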

  • @readytolearn7719
    @readytolearn7719 1 year ago

    All the other lectures were easier to grasp; this one was probably the toughest.

  • @boongbaang1124
    @boongbaang1124 5 years ago +1

    When we lump more than one state into a class and find the probability of entering the lump, or the time needed to reach it: how do we calculate the values for the individual states within the lump?

    • @AZTECMAN
      @AZTECMAN 4 years ago +1

      A lump is a collection of classes. You should distinguish a lump from a recurrent/transient class. The lumping is useful in our 'expected time until absorption' example, but not for the 'probability of eventually entering a certain recurrent class' example (if we lumped there, we'd always get probability 1). See the sketch after this reply.
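
    A minimal NumPy sketch of the distinction above, on a hypothetical 4-state chain (two transient states, two absorbing states; the numbers are made up). The fundamental matrix (I - Q)^-1 gives both the absorption probabilities into each individual absorbing state and the expected time until absorption; the latter depends only on the transient part, which is why lumping the recurrent states is harmless for the time question but makes the entry probability trivially 1.

        import numpy as np

        # Hypothetical chain (not from the lecture): states 1, 2 are transient,
        # states 3, 4 are absorbing, each forming its own recurrent class.
        # Q = transitions among transient states, R = transient -> absorbing.
        Q = np.array([[0.4, 0.3],
                      [0.2, 0.5]])
        R = np.array([[0.3, 0.0],
                      [0.1, 0.2]])

        N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix

        # Probability, starting from each transient state, of ending up in each
        # individual absorbing state (no lumping needed for this question).
        A = N @ R
        print("absorption probabilities:\n", A)

        # Expected time until absorption. Lumping all absorbing states into one
        # changes nothing here, since we only ask when the transient part is left.
        t = N @ np.ones(2)
        print("expected times to absorption:", t)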

  • @boongbaang1124
    @boongbaang1124 5 years ago

    In the previous example, when we were at state 2, we directly took 0.2 as the probability to reach state 4. But here at 44:08, why are we using the u2 value?

    • @AZTECMAN
      @AZTECMAN 4 years ago

      I think you are mistaken; in the previous example a2 is equal to 0.2 + 0.8 * a1, not just 0.2 (see the sketch after this reply).
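
    A small numerical sketch of the recursion quoted in the reply above. The equation a2 = 0.2 + 0.8 * a1 is the one from the comment; the equation for a1 (a1 = 0.6 * a2, with the remaining 0.4 leading to the other absorbing state) is a made-up placeholder, since the full chain from the lecture is not reproduced here.

        import numpy as np

        # Absorption probabilities satisfy a_i = sum_j p_ij * a_j, with a = 1 in
        # the target recurrent class and a = 0 in the other absorbing states.
        # Written as a linear system in (a1, a2):
        #    a1 - 0.6 * a2 = 0       (hypothetical equation for state 1)
        #  -0.8 * a1 + a2 = 0.2      (the equation quoted in the reply)
        A = np.array([[1.0, -0.6],
                      [-0.8, 1.0]])
        b = np.array([0.0, 0.2])
        a1, a2 = np.linalg.solve(A, b)
        print(f"a1 = {a1:.4f}, a2 = {a2:.4f}")

        # The point: a2 is not simply 0.2, because from state 2 the chain can
        # first move to state 1 and still be absorbed into state 4 later.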

  • @nerdkid8251
    @nerdkid8251 2 years ago

    May I ask how the 106 phone lines were computed? I tried setting pi_b = 0.01 and substituting some numbers to calculate the i needed using the equation at 30:15, but it seems the RHS of the equation keeps getting larger with larger i...
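
    A sketch of one way to get that number, assuming the lecture's parameters are an arrival rate of 30 calls per minute and an average call duration of 3 minutes, so the load is rho = lambda/mu = 90 (that is my recollection of the numbers; check them against the video). The stationary probabilities of the birth-death chain are pi_i proportional to rho^i / i!, and the normalization constant depends on the number of lines B as well, which is why looking at rho^i / i! alone keeps growing; the normalized pi_B does eventually drop below 0.01.

        import math

        # Assumed offered load from the lecture: 30 calls/min * 3 min = 90.
        rho = 90.0

        def blocking_probability(B, rho):
            # pi_B for a birth-death chain with B lines (the Erlang B formula):
            # pi_i is proportional to rho**i / i!, normalized over i = 0..B.
            terms = [rho**i / math.factorial(i) for i in range(B + 1)]
            return terms[-1] / sum(terms)

        # Find the smallest B with pi_B <= 0.01.
        B = 0
        while blocking_probability(B, rho) > 0.01:
            B += 1
        print(B, blocking_probability(B, rho))   # expect B to come out around 106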

  • @leonawu5506
    @leonawu5506 6 years ago

    That's really helpful!! Thanks!!

  • @pil-jaepark4882
    @pil-jaepark4882 3 years ago +1

    Steve Jobs should've watched (23:25). At the dawn of the new phone era, it was destined that we would all use iPhones.