Markov Chains & Transition Matrices

  • Published 29 Nov 2024

COMMENTS • 165

  • @nehabose3181
    @nehabose3181 3 years ago +56

    Best video on Markov Chains. So easy to understand and no unnecessary analogies. Great job!

  • @blacksages
    @blacksages 3 years ago +3

    THANK YOU! God, I finally understood how to make the damn matrix. My professor is so bad at explaining things; he just goes by the sum version and doesn't explain anything.
    This is such a life saver!

  • @LisaZuckberg
    @LisaZuckberg 1 year ago +1

    OMG! This is the best video on Markov Chains. I just spent 30 minutes reading articles on Medium, Brilliant, Wikipedia, etc., and couldn't understand what they meant at all. But 4 minutes into this video, I got it!

  • @tyfoodsforthought
    @tyfoodsforthought 4 years ago +24

    So crisp. So clean. So clear.

  • @titobruni987
    @titobruni987 3 years ago +16

    I've been watching math videos for a few years and I have to say that your channel is the best. You just teach in an extremely organized and interesting way. Please keep it up!

    • @DrTrefor
      @DrTrefor  3 years ago +5

      Thank you so much!

  • @dhrumilburad5859
    @dhrumilburad5859 3 years ago +25

    What a video, man, what an explanation. I literally understood the concept in one go. Keep it up!!!!!!

  • @jimf2482
    @jimf2482 8 months ago +2

    Dr Trefor, you're a blessing. Thank you for such clear explanations. They're liquid gold.

  • @kidnard6017
    @kidnard6017 3 years ago +27

    Dude, you are a magician, the way you explain it! It seems so easy and makes so much sense, thank you so much! Please do more parts!

  • @anaibrahim4361
    @anaibrahim4361 3 years ago +3

    don't know how to thank you, sir
    this deserves to be paid for
    really great job and pure gold

  • @frozenkingfrozenking6989
    @frozenkingfrozenking6989 1 year ago +1

    Good, very good. Some people on YouTube are just afraid of writing math whenever they are teaching, and just mystify the subject. This is good math indeed.

  • @Anthony-ig6ds
    @Anthony-ig6ds 1 month ago +1

    BEST EXPLANATION EVER, I'M COMING BACK TO SEE IF YOU CAN EXPLAIN THE HARDER STUFF THIS WELL

  • @steng9887
    @steng9887 7 months ago +1

    Simply excellent explanation. In 6 minutes you made me understand what I had tried to study for a week.

  • @xxg-forcexx8734
    @xxg-forcexx8734 2 years ago +30

    Generally speaking, in the literature the rows are "from state A" and the columns are "to state B" (so transpose his matrix), and it would have been nice to see the even simpler form of P using eigenvalues and eigenvectors, P = ADA^-1, to better show how this generalizes transitions and then shows the rate at which the Markov chain converges. (A quick sketch of the convention point follows.)
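
    A minimal sketch of the two conventions, with numbers resembling the video's example (an assumption, not taken from the video itself): a column-stochastic matrix acts on a column vector from the left, its transpose is the row-stochastic form common in the literature, and both give the same evolution.

        import numpy as np

        # Column-stochastic, as in the video: column j holds the probabilities "from state j".
        P_col = np.array([[0.75, 0.40],
                          [0.25, 0.60]])
        s = np.array([1.0, 0.0])   # start in state A

        # Row-stochastic, as in much of the literature: rows are "from", columns are "to".
        P_row = P_col.T

        print(P_col @ s)   # [0.75 0.25] -- vector on the right
        print(s @ P_row)   # [0.75 0.25] -- same result, vector on the left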

  • @SAAARC
    @SAAARC 4 years ago +3

    The explanations are easy to understand and the video length is at the sweet spot. Great job!
    Looking forward to the rest of the series.

    • @DrTrefor
      @DrTrefor  4 years ago +2

      Thank you, glad you're enjoying!

  • @tongtong168
    @tongtong168 3 years ago +2

    Your explanation is much better than Khan Academy's, let's say. So detailed and so simple to understand.

    • @DrTrefor
      @DrTrefor  3 years ago +1

      Thank you so much!

    • @anti-tankartur677
      @anti-tankartur677 2 years ago

      His video is completely wrong about the matrix positioning

  • @amankrishna751
    @amankrishna751 2 years ago +2

    Give this man an award!

  • @ww905
    @ww905 3 years ago +1

    Wow, I was writing up my thesis on a TMMC application for my little chemical adsorption model and couldn't understand the maths behind it properly. You saved my life.

  • @gouthamshiny3346
    @gouthamshiny3346 3 years ago +2

    best explanation I could find on YouTube

  • @joelipowski8393
    @joelipowski8393 1 year ago +1

    Crystal clear explanation. Direct and easy to understand. Thank You!

  • @databridgeconsultants9163
    @databridgeconsultants9163 3 years ago +1

    Wow. Thank you very much. What a way to make this look so easy. I understood this concept for the first time in my life.

  • @oleksandrrechynskyi7636
    @oleksandrrechynskyi7636 3 years ago +5

    that's what I call a straightforward explanation. Thanks a lot!

  • @Sid-xt3kt
    @Sid-xt3kt 1 year ago +5

    This guy saving my linear algebra grades

    • @Sid-xt3kt
      @Sid-xt3kt 1 year ago

      also I just realized that Markov chains look like finite state machines

  • @JohnSmith-qp4bt
    @JohnSmith-qp4bt 2 years ago +2

    Clear explanation, well poised and articulated. Makes it interesting, even without illustrating a real-life practical example in the video. Also, a true desire to teach.

  • @HM-he1ob
    @HM-he1ob 3 years ago

    You shed light for people like me who suffered a lot through a 90-minute college class.

  • @taotaotan5671
    @taotaotan5671 3 years ago +2

    So we can apply eigendecomposition to simplify the matrix exponentiation! Thanks Trefor!

    • @DrTrefor
      @DrTrefor  3 years ago +2

      Absolutely! That was beyond the scope of this video, but would definitely be the next thing to do.

  • @laxshanganasan1680
    @laxshanganasan1680 15 days ago

    I have an exam today on this topic and you clearly explained it to me

  • @mr2seis388
    @mr2seis388 7 months ago

    This guy gave a 6-minute crash course when I started out so confused. My man.

  • @skepticbubble3166
    @skepticbubble3166 2 years ago +2

    Our Markovian hero, thanx

  • @tristanlouthrobins
    @tristanlouthrobins 3 years ago +1

    Incredibly good explanation of Markov Chains. Subscribed!

  • @justsayin...1158
    @justsayin...1158 1 year ago

    Thank you for this very practical video; I was immediately able to apply the concept. I didn't immediately understand why multiplying the transition matrix by the current state vector yields the next state vector, but after some further consideration of what the multiplication actually does, it is quite clear why and how that works.
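
    Writing the product out entrywise makes this concrete; each entry of the next state is just the law of total probability over the ways to arrive there (column convention as in the video, so p_{BA} means "from B to A"):

        \begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix}
        = \begin{pmatrix} p_{AA} & p_{BA} \\ p_{AB} & p_{BB} \end{pmatrix}
          \begin{pmatrix} x_n \\ y_n \end{pmatrix}
        = \begin{pmatrix} p_{AA}\,x_n + p_{BA}\,y_n \\ p_{AB}\,x_n + p_{BB}\,y_n \end{pmatrix}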

  • @michaelc.4321
    @michaelc.4321 2 years ago

    This just blew my mind, because it made me realize that the final convergent state of a Markov chain is dictated by the transition matrix's eigenvector corresponding to its largest eigenvalue, because the repeated multiplication essentially comprises the power method for finding the largest eigenvalue/eigenvector.
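
    A numeric sketch of that observation, with an illustrative matrix (an assumption, not necessarily the video's): repeated multiplication is exactly power iteration, and it settles on the eigenvector of P with eigenvalue 1.

        import numpy as np

        P = np.array([[0.75, 0.40],
                      [0.25, 0.60]])   # column-stochastic: columns sum to 1
        s = np.array([1.0, 0.0])       # any starting distribution

        for _ in range(50):            # repeated multiplication = power iteration
            s = P @ s
        print(s)                       # ~[0.6154, 0.3846]

        w, v = np.linalg.eig(P)
        pi = v[:, np.argmax(w)]        # eigenvector for the largest eigenvalue (1)
        print(pi / pi.sum())           # same stationary distribution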

  • @elakhe-llonamlomzale4774
    @elakhe-llonamlomzale4774 3 years ago +2

    Simple and comprehensive, thank you

  • @MrMahankumar
    @MrMahankumar 3 years ago +1

    I cried.
    This was very good

  • @arsenalaman6493
    @arsenalaman6493 4 years ago +2

    Amazing video, sir... Thank you for the video. Love from India

  • @shreyasingale
    @shreyasingale 4 years ago +2

    Thanks for the lucid explanation!

  • @gautam1940
    @gautam1940 3 years ago +1

    Lovely. I think even Markov would not be able to explain it like that!!! Liked and subscribed!!!

    • @DrTrefor
      @DrTrefor  3 years ago +1

      Thanks for the sub!

  • @pallabkumar5775
    @pallabkumar5775 7 months ago

    That was a wonderful explanation of the Markov chain, thank you

  • @thomasmale2302
    @thomasmale2302 1 year ago +1

    I liked your explanation it was simple and clear, thank you so much.

  • @omedhassan2190
    @omedhassan2190 3 years ago +1

    Simple and comprehensive. Thank you soooooo much

  • @safwanrushdan5260
    @safwanrushdan5260 5 months ago +1

    I am Safwan, good video👍🏻🙏🏻

  • @itays7774
    @itays7774 2 years ago +2

    Also, the diagonalization of a general two-state transition matrix is quite nice, so taking a high power of one is not so bad.
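
    Sketching that out (a standard calculation, offered as a hedged aside; column convention, with a = P(A→B) and b = P(B→A)): the eigenvalues are 1 and 1 - a - b, which gives a closed form for any power.

        P = \begin{pmatrix} 1-a & b \\ a & 1-b \end{pmatrix}, \qquad
        \lambda_1 = 1, \quad \lambda_2 = 1 - a - b,

        P^n = \frac{1}{a+b}\begin{pmatrix} b + a\lambda_2^{\,n} & b - b\lambda_2^{\,n} \\
                                           a - a\lambda_2^{\,n} & a + b\lambda_2^{\,n} \end{pmatrix}
            \;\longrightarrow\; \frac{1}{a+b}\begin{pmatrix} b & b \\ a & a \end{pmatrix}
            \quad (|\lambda_2| < 1,\ n \to \infty).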

  • @蔡小宣-l8e
    @蔡小宣-l8e 2 years ago

    Thank you Dr. Trefor Bazett! 谢谢!

  • @arjunag7553
    @arjunag7553 6 months ago

    You, Sir, are a Superhero.❤

  • @interest21stcentury74
    @interest21stcentury74 3 years ago +1

    Wow! Interesting Topic! Thank You for covering something wonderful!

    • @DrTrefor
      @DrTrefor  3 years ago +1

      Glad you enjoyed it!

  • @mohammedalsubaie3512
    @mohammedalsubaie3512 2 years ago +7

    Thank you for your video, it is well explained, but at 3:19, isn't the matrix supposed to be the other way around? I mean, shouldn't the 0.25 be in the place of the 0.4? Because the rows give the directions, not the columns?

    • @MrVoronoi
      @MrVoronoi 3 months ago +2

      yes, you are right

  • @hnrajaonarison5034
    @hnrajaonarison5034 3 years ago +1

    I really liked your easy explanation. Thank you.

  • @Darkev77
    @Darkev77 4 years ago +5

    Brilliant to say the least

  • @kazeemkz
    @kazeemkz 3 years ago

    Spot on delivery Dr, many thanks

  • @lume-eugene.h2161
    @lume-eugene.h2161 2 years ago +1

    Thank you, I think I will be able to ace the CS 70 final exam at Berkeley.

  • @somcana
    @somcana 3 years ago +1

    Why would someone dislike your videos? They must be in a "dislike" Markov state. I wonder when they will transition, Dr Trefor Bazett.

  • @kekoHere0610
    @kekoHere0610 2 years ago +1

    You just saved me!
    Thanks

  • @leandrocabezas7379
    @leandrocabezas7379 1 month ago

    I am impressed, wayyyy too good. Liked and Subscribed

  • @MuhammadAli-ut1sh
    @MuhammadAli-ut1sh 3 years ago +1

    Awesome, cleared up the concept for me, thank you!

  • @dewanmohammedabdulahad527
    @dewanmohammedabdulahad527 3 years ago +6

    Thank you for the lecture. It's easy to understand. Do you have any plans for nonlinear control theory (obviously in an easy way, like you taught now)?

  • @eliasdargham
    @eliasdargham 1 year ago

    Absolutely clear and concise, thank you!
    It's worth noting, however, that computing the P^n matrix is computationally expensive; is there a better way to solve for P^n without having to take the power directly?
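
    One standard answer, sketched below with an illustrative matrix: exponentiation by squaring needs only O(log n) matrix multiplications instead of n (numpy's matrix_power takes this approach), and the diagonalization discussed in other comments reduces the cost further.

        import numpy as np

        def mat_pow(P, n):
            """P^n by repeated squaring: O(log n) matrix multiplications."""
            result = np.eye(P.shape[0])
            while n > 0:
                if n & 1:                 # current binary digit of n is 1:
                    result = result @ P   # fold this power of P into the answer
                P = P @ P                 # square: P, P^2, P^4, P^8, ...
                n >>= 1
            return result

        P = np.array([[0.75, 0.40],
                      [0.25, 0.60]])
        print(mat_pow(P, 100))                   # agrees with...
        print(np.linalg.matrix_power(P, 100))    # ...numpy's built-in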

  • @PardeshDhakal-x4b
    @PardeshDhakal-x4b 9 months ago

    Very well explained sir! Thank you.

  • @slamburglar909
    @slamburglar909 4 years ago +1

    great video! there's so much more you can talk about concerning markov chains, this is just the beginning! Like how they can limit to some stationary matrix under certain conditions on the transition matrix P, or even easier ways to calculate P^n (if you decompose it such that P = U D U^-1, where U is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues, then P^n = U D^n U^-1, where D^n simply has eigenvalues^n along its diagonal). They are very interesting indeed, you have your work laid out for you! XD (A numeric check follows this thread.)

    • @DrTrefor
      @DrTrefor  4 years ago

      Totally! I am thinking of doing some follow-ups; we are just scratching the surface here.
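
    A numeric check of the decomposition described above, with an illustrative matrix (the columns of U are eigenvectors of P, and D holds the eigenvalues on its diagonal):

        import numpy as np

        P = np.array([[0.75, 0.40],
                      [0.25, 0.60]])

        eigvals, U = np.linalg.eig(P)    # columns of U are eigenvectors of P
        D = np.diag(eigvals)

        n = 10
        P_n = U @ np.linalg.matrix_power(D, n) @ np.linalg.inv(U)   # U D^n U^-1
        assert np.allclose(P_n, np.linalg.matrix_power(P, n))

        # Only the diagonal gets powered; the second mode (0.35**n here) decays away.
        print(np.diag(D) ** n)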

  • @korakatk318
    @korakatk318 7 months ago +1

    Awesome video!

  • @somenewkid6892
    @somenewkid6892 4 years ago +1

    Wow, it just so happens that today's lecture included transition matrices! What luck!

  • @ayyoubm
    @ayyoubm 3 years ago +1

    GREAT EXPLANATION!

  • @joejoe-lb6bw
    @joejoe-lb6bw 3 years ago +1

    Nice! Even I understood that.

  • @BrotherNineinChicago
    @BrotherNineinChicago 2 years ago +1

    Brilliant explanation thank you :)

  • @arindamkesh4762
    @arindamkesh4762 8 months ago

    Here's a specific question. Can you help solve this? (A mechanical sketch follows the table below.)
    LRA: Please calculate the 5-year LRA (Long Run Average) PiT transition matrices for each rating class.
    Rating R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 Default
    R1 77.49% 13.10% 3.20% 2.10% 1.02% 0.96% 0.75% 0.64% 0.52% 0.21% 0.01%
    R2 5.00% 70.44% 4.10% 4.04% 3.20% 3.10% 2.90% 2.85% 1.80% 1.46% 1.11%
    R3 4.12% 5.12% 72.00% 5.14% 3.11% 2.80% 1.70% 1.66% 1.55% 1.42% 1.38%
    R4 2.14% 2.80% 3.81% 72.96% 3.45% 3.42% 2.92% 2.60% 2.00% 1.99% 1.91%
    R5 1.20% 1.36% 1.51% 1.72% 76.14% 4.10% 3.20% 3.13% 2.99% 2.35% 2.29%
    R6 0.11% 0.19% 0.21% 0.28% 4.35% 73.88% 7.19% 4.42% 3.27% 3.10% 3.00%
    R7 0.10% 0.21% 0.31% 0.36% 0.98% 1.55% 74.68% 8.88% 5.28% 4.02% 3.63%
    R8 0.13% 0.24% 0.38% 0.48% 1.20% 1.56% 5.45% 71.23% 7.26% 6.10% 5.97%
    R9 0.12% 0.23% 0.36% 0.44% 1.21% 1.54% 3.20% 4.11% 65.67% 11.91% 11.21%
    R10 0.10% 0.22% 0.34% 0.46% 1.20% 1.55% 2.60% 2.72% 4.32% 70.28% 16.21%
    Default 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 100.00%
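
    Not a full answer to the LRA/PiT modeling question, but a sketch of the basic mechanics under the simplest reading (a time-homogeneous annual matrix, which is an assumption): the cumulative 5-year matrix is then the fifth power. Note this table is row-stochastic (rows are "from" and sum to 100%), the opposite orientation from the video, so state vectors would multiply on the left.

        import numpy as np

        # Annual rating-migration matrix from the comment above, in percent;
        # rows R1..R10 plus the absorbing Default state.
        P = np.array([
            [77.49, 13.10,  3.20,  2.10,  1.02,  0.96,  0.75,  0.64,  0.52,  0.21,   0.01],
            [ 5.00, 70.44,  4.10,  4.04,  3.20,  3.10,  2.90,  2.85,  1.80,  1.46,   1.11],
            [ 4.12,  5.12, 72.00,  5.14,  3.11,  2.80,  1.70,  1.66,  1.55,  1.42,   1.38],
            [ 2.14,  2.80,  3.81, 72.96,  3.45,  3.42,  2.92,  2.60,  2.00,  1.99,   1.91],
            [ 1.20,  1.36,  1.51,  1.72, 76.14,  4.10,  3.20,  3.13,  2.99,  2.35,   2.29],
            [ 0.11,  0.19,  0.21,  0.28,  4.35, 73.88,  7.19,  4.42,  3.27,  3.10,   3.00],
            [ 0.10,  0.21,  0.31,  0.36,  0.98,  1.55, 74.68,  8.88,  5.28,  4.02,   3.63],
            [ 0.13,  0.24,  0.38,  0.48,  1.20,  1.56,  5.45, 71.23,  7.26,  6.10,   5.97],
            [ 0.12,  0.23,  0.36,  0.44,  1.21,  1.54,  3.20,  4.11, 65.67, 11.91,  11.21],
            [ 0.10,  0.22,  0.34,  0.46,  1.20,  1.55,  2.60,  2.72,  4.32, 70.28,  16.21],
            [ 0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00, 100.00],
        ]) / 100.0

        # 5-year cumulative transition matrix under the homogeneity assumption.
        P5 = np.linalg.matrix_power(P, 5)
        print(np.round(P5 * 100, 2))   # back to percent; Default stays absorbing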

  • @arsenalaman6493
    @arsenalaman6493 4 years ago +5

    I am Almina Khatun, who also comments on your videos, sir... I'm always first

  • @OfferoC
    @OfferoC 4 years ago +1

    very good explanation. thank you.

  • @marcocaballero
    @marcocaballero 2 years ago +2

    Great video, thanks!! Any chance to follow up on this topic? Perhaps look into Markov Models?

  • @ThabetMarwa
    @ThabetMarwa 2 years ago +1

    This was absolutely brilliant. This video could also be used to explain quantum spin 1/2; just make a and b stand for spin up and spin down

  • @teflonpan115
    @teflonpan115 3 years ago

    Thank you for confusing me. Great work 👍

  • @praisesharon4241
    @praisesharon4241 9 months ago

    Very nice explanation

  • @Darkev77
    @Darkev77 4 years ago +2

    If we were to line up the probability distributions to sum to 1 along the rows rather than the columns, that wouldn't work (keeping the vector unchanged). Is that because of how it's defined, due to the notation used?

    • @DrTrefor
      @DrTrefor  4 years ago +2

      Indeed, it's just a quirk of the definition. If you wanted to do it your way, you'd have to be multiplying with the vector on the left instead, which would be just as good but not as conventional.

  • @SiriusFuenmayor
    @SiriusFuenmayor 2 years ago +1

    Great! Very clear and concise. What is the connection between this and Turing machines?

  • @anya7us
    @anya7us 3 years ago

    Lovely explanation

  • @AryanKumar-qo6fi
    @AryanKumar-qo6fi 3 years ago +3

    Respect!!!!!!✌✌ >>>Legend👏

  • @KhoaLe-oc6xl
    @KhoaLe-oc6xl 3 years ago +1

    Your 6 minutes = my professor’s 1 hour

  • @luisanaencarnacion6050
    @luisanaencarnacion6050 1 year ago

    Thank you man! This was so helpful☺️

  • @bscutajar
    @bscutajar 2 years ago

    At first I thought the result wouldn't always add up to 1, but it can easily be shown that if both columns of the P matrix, and of course the single column of the S vector, add up to 1, then the product's column also adds up to 1.
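
    A one-line version of that argument: let 1^T be the all-ones row vector, so "columns sum to 1" reads 1^T P = 1^T; then

        \mathbf{1}^\top (P s) = (\mathbf{1}^\top P)\, s = \mathbf{1}^\top s = 1.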

  • @anderson4429
    @anderson4429 3 months ago

    absolutely amazing

  • @devendraparmar7068
    @devendraparmar7068 11 months ago

    Beautiful video Sir..👌👌

  • @mxlexrd
    @mxlexrd 3 years ago +3

    This Markov process feels vaguely quantum mechanical to me, the idea of probabilities spreading out over time over multiple states.

  • @DJ-dk3hh
    @DJ-dk3hh 2 months ago

    I am a bit confused about how we came up with S0. If we had 3 vectors, how would you come up with S0? Watching the previous video helped me understand how S1 was derived, but I cannot understand how S0, the initial state, was derived. Why not 0.5/0.5?

  • @hiamy1250
    @hiamy1250 9 months ago

    omg this video helps me a lot! thanks a ton

  • @murthyrallabandi
    @murthyrallabandi 3 years ago +1

    It sounds good, I can apply this to the roulette game! 😅

  • @surgeonrecords
    @surgeonrecords 2 years ago

    very clear. nice work.

  • @vwlh8r
    @vwlh8r 1 year ago

    I did not see a link to the video you referenced introducing matrix multiplication

  • @Xeando
    @Xeando 2 years ago

    It would've been extremely helpful if you went through more examples at the end, like s4 or s6 or whatever

  • @MyFirstReurveBow
    @MyFirstReurveBow 8 months ago

    Thank you for your videos. If you explained the logic behind it, and not just the matrix/equation structure perspective, it would be much easier to understand. Also, the first video is not on the list.

  • @anaisliu6709
    @anaisliu6709 2 years ago

    This is an awesome video; however, I am still confused: is it possible to calculate the transition matrix using only the initial probabilities? Or to calculate the initial probabilities using only the transition matrix?

  • @tamoghnasarkar3829
    @tamoghnasarkar3829 3 years ago

    I didn't understand how the S2 vector was determined.
    If S2 is just the states of B, it should just be 0.6 and 0.4, right? How do we get 0.66 and 0.34?
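
    A worked sketch of the likely resolution, assuming the video's matrix and first state are P = [[0.75, 0.4], [0.25, 0.6]] and S1 = (0.75, 0.25) (an assumption reconstructed from the values quoted in the comments): S2 is not a column of P; it is P applied to S1.

        S_2 = P\,S_1
            = \begin{pmatrix} 0.75 & 0.4 \\ 0.25 & 0.6 \end{pmatrix}
              \begin{pmatrix} 0.75 \\ 0.25 \end{pmatrix}
            = \begin{pmatrix} 0.75\cdot 0.75 + 0.4\cdot 0.25 \\ 0.25\cdot 0.75 + 0.6\cdot 0.25 \end{pmatrix}
            = \begin{pmatrix} 0.6625 \\ 0.3375 \end{pmatrix}
            \approx \begin{pmatrix} 0.66 \\ 0.34 \end{pmatrix}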

  • @duckymomo7935
    @duckymomo7935 4 years ago

    When does a Markov chain converge to a steady state?
    How many steps does it take to converge?
    Memorylessness property explained
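
    A hedged sketch of the standard answers (textbook theory, not from the video): a finite chain has a unique steady state pi, reached from any start, whenever some power of P has all positive entries (a "regular" chain), and the error shrinks geometrically at the rate of the second-largest eigenvalue magnitude:

        P\pi = \pi, \qquad \|s_n - \pi\| = O(|\lambda_2|^n),

    so the number of steps to reach a given tolerance scales like log(tolerance) / log|lambda_2|.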

  • @simplicitas5113
    @simplicitas5113 1 year ago

    I derived this before knowing what it was

  • @lucycai3356
    @lucycai3356 3 years ago +1

    At 3:32, I think the rows in the matrix should add up to 1. Am I correct? Thanks!

  • @internationaleconomics2327
    @internationaleconomics2327 3 years ago +1

    Thank you, it was useful

  • @gloriashen2671
    @gloriashen2671 3 years ago

    Best explanation ever!!!

  • @kianushmaleki
    @kianushmaleki 2 years ago

    Does this non-Markovian system turn into a Markovian system if we let n → infinity?

  • @montaarf1236
    @montaarf1236 4 months ago

    I think there was something wrong in the video, namely the initial vector: you write it vertically, but it should be horizontal, S = (x y ...), and the multiplication should be S·P^n, not as in the video, since the results are not the same. And thank you for the video.

  • @satyambhardwaj2289
    @satyambhardwaj2289 4 years ago +1

    toast to the second part...

  • @jjlarochelle2523
    @jjlarochelle2523 1 year ago

    How do you find at what value of n the S vector will have a given value for x1?
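
    One simple approach, sketched with hypothetical numbers: step the chain until x1 first crosses the target. For a two-state chain started at (1, 0), x1 decays monotonically toward its steady-state value, so the target must lie between the starting and steady-state values.

        import numpy as np

        P = np.array([[0.75, 0.40],
                      [0.25, 0.60]])
        s = np.array([1.0, 0.0])
        target = 0.62      # hypothetical desired value for x1

        n = 0
        while s[0] > target:
            s = P @ s
            n += 1
        print(n, s[0])     # first n with x1 <= target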

  • @aiswaryavijayan260
    @aiswaryavijayan260 3 years ago +1

    Saved me👏

  • @BlackCodeMath
    @BlackCodeMath 9 months ago

    Beautiful.

  • @freedmoresidume
    @freedmoresidume 3 years ago

    Incredible 🔥