Markov Matrices

  • Published 25 Jan 2025

COMMENTS • 41

  • @boruiwang1738
    @boruiwang1738 3 years ago +15

    Huge thanks to you!! Very clearly explained at a comfortable pace. It's nearly finals and my teacher is only covering the theorems and some calculation examples. This MIT series really showed me what matrices can achieve and the connections between concepts. (I especially like the Fibonacci part and this particle part.) Good job!

  • @DirkGently-p3v
    @DirkGently-p3v 1 year ago +20

    Herein we observe an advantage of being left-handed. :)

  • @prajyot2021
    @prajyot2021 2 years ago +7

    Such a brief and impeccable lecture.
    Totally enjoying it

  • @nilslorand
    @nilslorand 2 years ago +5

    love his enthusiasm :)
    Good video

  • @surajmirchandani4613
    @surajmirchandani4613 5 years ago +6

    Best one yet. Really cleared everything up in this chapter.

  • @mauisstepsis5524
    @mauisstepsis5524 3 months ago +1

    Good example, but this video is missing a very important part of DTMC: the transition of the distribution from the conditional probability.
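
  The transition the comment presumably refers to is the law of total probability: writing p_k(i) = P(X_k = i), one Markov step is

      p_{k+1}(j) = \sum_i P(X_{k+1} = j \mid X_k = i) \, p_k(i),

  which in matrix form is p_{k+1} = A p_k, where column i of A holds the conditional probabilities P(X_{k+1} = j | X_k = i).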

  • @fedepan947
    @fedepan947 4 years ago +15

    Thank you! Good explanation.
    But I think it is not necessary to calculate the decomposition A = UDU^(-1).
    We know that the probability after k steps is p_k = c(λ1)^k x1 + d(λ2)^k x2, where x1 and x2 are the eigenvectors and λ1, λ2 the eigenvalues; with p0 we can calculate the coefficients c and d by setting k = 0. The probability after 100 steps is then p_k for k = 100. (A runnable sketch of this shortcut follows this thread.)

    • @dexterity3696
      @dexterity3696 4 years ago +4

      Definitely, maybe he hasn't taken the course by prof. Strang. LOL

    • @thedailyepochs338
      @thedailyepochs338 4 years ago +2

      lol i was expecting him to do that and he never did

    • @thedailyepochs338
      @thedailyepochs338 4 years ago +2

      @@dexterity3696 he definitely didn't; if he had, he would have named the eigenvector matrix S and the diagonal eigenvalue matrix capital Lambda

    • @nprithvi24
      @nprithvi24 3 years ago +4

      I guess the main point of a recitation is not just to solve for an answer but to make students recall methods discussed previously in the class. For example, calculating the inverse of a matrix was discussed 3-4 lectures before this one, and there's a good chance students might have forgotten it. This tutorial was a good refresher.
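
  A minimal NumPy sketch of the shortcut from the comment above. The transition matrix here is an assumption: A = [[0.2, 0.4], [0.8, 0.6]] is consistent with the eigenvalues (1, -0.2) and the steady state (1/3, 2/3) quoted elsewhere in these comments, but may differ from the one in the video.

      import numpy as np

      # Assumed column-stochastic transition matrix (columns sum to 1).
      A = np.array([[0.2, 0.4],
                    [0.8, 0.6]])
      p0 = np.array([1.0, 0.0])  # start at point A with probability 1

      # Eigendecomposition: lam holds the eigenvalues, the columns of X
      # hold the corresponding eigenvectors x1 and x2.
      lam, X = np.linalg.eig(A)

      # Expand p0 = c*x1 + d*x2, i.e. solve X @ [c, d] = p0 for the coefficients.
      cd = np.linalg.solve(X, p0)

      # p_k = c*(lam1^k)*x1 + d*(lam2^k)*x2, evaluated directly at k = 100;
      # no need to form A = U D U^(-1) and take matrix powers.
      k = 100
      pk = X @ (cd * lam**k)
      print(pk)  # ~[0.3333, 0.6667]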

  • @lavacoop1792
    @lavacoop1792 1 month ago +1

    i presume it went to electron decomposition

  • @Amit.58
    @Amit.58 1 year ago +1

    Wow, quite an amazing problem ❤❤❤

  • @mohamedhason7838
    @mohamedhason7838 2 months ago

    ABSOLUTELY MINDBLOWING!

  • @박현진-d7h4d
    @박현진-d7h4d 1 year ago

    Such an interesting lecture and problem on Markov matrices!

  • @kostikoistinen2148
    @kostikoistinen2148 2 years ago +1

    This guy can explain things well. He says, "Welcome back." Now I’m trying to find the first video to which this video is a sequel. Could someone tell me where that first video is?

    • @mitocw
      @mitocw  2 years ago +2

      The YouTube playlist for the course: ua-cam.com/play/PL221E2BBF13BECF6C.html. The course materials on MIT OpenCourseWare: ocw.mit.edu/18-06SCF11. Best wishes on your studies!

  • @AnupKumar-wk8ed
    @AnupKumar-wk8ed 6 years ago +3

    Very good video and very clearly explained.

    • @abhilast6629
      @abhilast6629 6 years ago +2

      Hey Indian bro do you love mathematics?

    • @AnupKumar-wk8ed
      @AnupKumar-wk8ed 6 years ago +1

      @@abhilast6629 Sure I do.

  • @peterhind
    @peterhind 2 years ago +1

    So I sort of understand right up until the end. With the final probability for n = infinity being one third and one in two, how does that translate into an answer to the question 'What is the probability it is at A and at B after an infinite number of steps?' Is the answer that it's six times as likely to be at B as at A?

    • @Oleg86F
      @Oleg86F 1 year ago +1

      We start with matrix A and vector p0 = (1, 0), meaning a 100% probability that the particle is at point A. After an infinite number of steps (which are A^n * p0), we approach the vector (1/3, 2/3), which means: the particle is at point A with probability 1/3 (~33%) and at point B with probability 2/3 (~67%). (See the numerical check below this thread.)

    • @peterhind
      @peterhind 1 year ago

      @@Oleg86F Thanks, it's making more sense now
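
  A quick numerical check of this reply, using the same assumed transition matrix A = [[0.2, 0.4], [0.8, 0.6]] as in the sketch above:

      import numpy as np

      A = np.array([[0.2, 0.4],
                    [0.8, 0.6]])  # assumed transition matrix
      p = np.array([1.0, 0.0])    # p0: the particle starts at point A

      # Apply p_{k+1} = A p_k repeatedly; convergence is fast because the
      # second eigenvalue (-0.2) has absolute value < 1, so its term dies out.
      for _ in range(50):
          p = A @ p
      print(p)  # ~[0.3333, 0.6667]: P(at A) -> 1/3, P(at B) -> 2/3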

  • @josefahed9002
    @josefahed9002 1 month ago

    If I want p0 to be (0,1) and not (1,0), should I change the transition matrix?

  • @stephenclark9917
    @stephenclark9917 7 months ago

    The Markov matrix A is the transpose of what is usually presented.

  • @benbug11
    @benbug11 4 years ago +1

    Very well explained, thank you

  • @Maunil2k
    @Maunil2k 9 months ago

    Very well explained !!

  • @adamjahani4494
    @adamjahani4494 2 months ago

    MIT students are so lucky...

  • @ankanghosal
    @ankanghosal 3 years ago

    Very helpful video. Thanks, MIT

  • @ricardoV94
    @ricardoV94 3 years ago

    I get different eigenvalues: (1, -0.2)
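
  For what it's worth, the transition matrix assumed in the sketches above gives exactly these eigenvalues:

      import numpy as np

      # Eigenvalues of the assumed matrix from the sketches above.
      print(np.linalg.eigvals(np.array([[0.2, 0.4],
                                        [0.8, 0.6]])))  # [ 1.  -0.2]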

  • @levihuddleston1020
    @levihuddleston1020 5 months ago

    Rad, thanks!

  • @richard_guang
    @richard_guang 1 year ago +1

    This guy reminds me of Will from Good Will Hunting

  • @theodorechan4343
    @theodorechan4343 10 months ago

    this was great

  • @federizz686
    @federizz686 3 years ago

    Love this

  • @cssaziado
    @cssaziado 6 years ago

    Thank you, m7

  • @EmanuelCohen-HenriquezCiniglio

    Goat

  • @GoatzAreEpic
    @GoatzAreEpic 1 year ago

    ty fam

  • @fackarov9412
    @fackarov9412 3 years ago

    cool

  • @JosephKings-j9f
    @JosephKings-j9f 10 months ago

    gg

  • @reginalnzubehimuonaka6659
    @reginalnzubehimuonaka6659 2 years ago

    For an MIT solution, it lacks some proof. We do not always just see it; we need a detailed explanation.
    But it is fine.
