Markov Chains: Recurrence, Irreducibility, Classes | Part - 2

  • Published 16 Dec 2024

COMMENTS • 116

  • @abhishekarora4007
    @abhishekarora4007 3 years ago +22

    Why does this video only have views in the thousands? It needs to be in the millions!

  • @yiyiyan7273
    @yiyiyan7273 3 years ago +26

    This is really nice for beginners to understand the basic properties of Markov chains. It would be great if your videos could go further into hidden Markov chains and factorial Markov chains :)

  • @jayeshpatil5112
    @jayeshpatil5112 10 months ago +2

    Can't believe Indian content is at its prime. First-rate explanation 🔥🔥🔥

  • @nicolasrodrigo9
    @nicolasrodrigo9 2 years ago +10

    You are a very good math professor, thanks a lot!

  • @ianbowen6344
    @ianbowen6344 4 years ago +13

    5:46 - "Between any of these classes, we can always go from one state to the other." But how can we do that if two of the classes are self-contained? Do you mean that we can always move between states within each class?

    • @NormalizedNerd
      @NormalizedNerd 4 years ago +12

      "we can always move between states within each class" This is what I meant.

    • @zenchiassassin283
      @zenchiassassin283 3 years ago +2

      @@NormalizedNerd thanks

    • @张雨-t6n
      @张雨-t6n 15 days ago

      I also find it a bit hard to understand why it can be called a communicating class when 1 cannot reach 0.

  • @iglesiaszorro297
    @iglesiaszorro297 3 years ago +8

    Very catchy! I request you to make more such videos on Markov chains with these kinds of awesome representations!! Markov chains were a dread for me previously... your videos are too cool!

  • @nujranujranujra
    @nujranujranujra 4 years ago +133

    Great to see high-quality educational channels like 3Blue1Brown coming from India. Btw, what software do you use to create the animations?

    • @NormalizedNerd
      @NormalizedNerd 4 years ago +110

      It's a Python library named manim, created by Grant Sanderson!

    • @abhirajarora7631
      @abhirajarora7631 3 months ago +1

      Are you sure about that comparison?

    • @NikethNath
      @NikethNath 26 days ago +2

      @@abhirajarora7631 I mean, Grant Sanderson is 3b1b, so it's bound to be similar

    • @suryanshvarshney111
      @suryanshvarshney111 25 days ago

      @@abhirajarora7631 Normalised Nerd will reach that level in the future, don't worry

  • @Mithu14062
    @Mithu14062 3 years ago +4

    Very good, precise explanation with nice animation. Thank you for your video. Please make more on solving numerical problems and implementing practical scenarios.

  • @georgemavran9701
    @georgemavran9701 1 year ago +4

    Amazing explanation! Can you also please explain the periodicity of a state in a Markov chain?

  • @real.biswajit
    @real.biswajit 2 years ago +1

    Your videos are really helpful, dada ❤

  • @tristanlouthrobins
    @tristanlouthrobins 5 months ago

    Absolutely brilliant, clear explanation!

  • @harishsuthar4604
    @harishsuthar4604 3 years ago +2

    Looks like the StatQuest channel. BAM!!!
    Clearly Explained!!!

  • @wonseoklee80
    @wonseoklee80 2 years ago

    Thanks for the video. Now I can understand whenever I hear "Markov chain"!

  • @olesiaaltynbaeva4132
    @olesiaaltynbaeva4132 3 years ago +2

    Your channel is a great resource! Thanks!

  • @AnonymousAnonymous-ug8tp
    @AnonymousAnonymous-ug8tp 1 year ago +2

    2:48 Sir, how come state 2 is a recurrent state? It is possible that after reaching state 1, the chain keeps looping back to state 1 forever; it is not "bound" to come back to state 2 from 1.

    • @alewis7041
      @alewis7041 1 year ago

      A recurrent state just means that if you keep moving from state to state forever, you will revisit that state infinitely often. Over a long enough run, 2 will be reached again and again. State 0, on the other hand, would only be visited finitely many times: a specific number of visits before the chain leaves state 0 and is unable to return.

    • @davethesid8960
      @davethesid8960 1 year ago

      No, because the self-loop at 1 isn't taken with probability 1. So, provided you wait long enough, you will eventually leave state 1.
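
The thread above can be checked numerically. Below is a minimal Python sketch (not from the video; the transition probabilities are illustrative assumptions, but they follow the same structure: state 0 can be left and never re-entered, while states 1 and 2 communicate and 1 has a self-loop). It estimates the probability of ever returning to a state after starting there.

```python
import random

# Illustrative transition matrix; row i holds the outgoing probabilities of
# state i, and each row sums to 1. These numbers are assumptions, not the
# video's values.
P = [
    [0.5, 0.5, 0.0],  # state 0: self-loop or leave to 1; nothing ever comes back to 0
    [0.0, 0.6, 0.4],  # state 1: self-loop with prob 0.6, move to 2 with prob 0.4
    [0.0, 1.0, 0.0],  # state 2: always move to 1
]

def returns_to(start, max_steps=1_000):
    """Simulate one walk from `start` and report whether it ever revisits `start`."""
    state = start
    for _ in range(max_steps):
        state = random.choices(range(3), weights=P[state])[0]
        if state == start:
            return True
    return False

trials = 10_000
for s in (0, 2):
    est = sum(returns_to(s) for _ in range(trials)) / trials
    print(f"estimated return probability to state {s}: {est:.3f}")
# State 2 comes out at ~1.0 (recurrent); state 0 comes out at ~0.5 (transient),
# since only its own self-loop can ever bring the walk back to 0.
```

The estimate for state 2 is only approximately 1 because each walk is truncated at max_steps; raising max_steps pushes it arbitrarily close to 1.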

  • @Garrick645
    @Garrick645 4 months ago +2

    Bro, we need more videos. Don't wait for comments, just do it 🙏🙏❤❤

  • @LouisKahnIII
    @LouisKahnIII 8 days ago

    This is excellent info, well presented. Thank you.

  • @amarparajuli692
    @amarparajuli692 3 years ago +2

    Amazing content for ML and data science people. Keep it up, bro. Will share it with my ML comrades.

  • @kirananumalla
    @kirananumalla 4 years ago +2

    Very clearly explained! Yes, it would be useful if there were more videos.

  • @amritayushman3443
    @amritayushman3443 1 year ago +1

    Thanks for the videos. Helped me a lot. Would appreciate it if you uploaded a video with a complete, in-depth mathematical analysis of the Markov chain and its stationary probabilities.

  • @nid8490
    @nid8490 2 years ago +2

    At @2:36: I beg to differ. There is a non-zero probability that once I go from State 2 to State 1, I continue to be in State 1 forever. In this case, we are not *bound* to come back to State 2 ever again. So I wouldn't say the probability of ever coming back to State 2 from State 2 is *1*.
    (Or am I missing something here?)

    • @mohamedaminekhadhraoui6417
      @mohamedaminekhadhraoui6417 7 months ago

      The probability that we stay at state 1 forever is zero. We can go from state 1 back to state 1 once, twice, or a billion times, but we will come back to state 2 eventually.
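
A compact way to put the reply above, writing p < 1 for the self-loop probability at state 1 (a symbol introduced here just for illustration):

```latex
\Pr[\text{the walk stays in state 1 for } n \text{ consecutive steps}] = p^{n} \longrightarrow 0 \quad \text{as } n \to \infty
```

So the event "loop at 1 forever" has probability 0, and a walk started at state 2 returns to state 2 with probability 1, which is exactly what recurrence of state 2 requires.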

  • @kindykomal
    @kindykomal 2 years ago +1

    Why don't our teachers teach like this? I was hating maths a few minutes ago, until I turned on this video. Thank you so much for this much-needed video 🥺. Now I kinda want to do a PhD in this instead 😂🙏🏻

  • @karannchew2534
    @karannchew2534 2 years ago +1

    Notes for my future revision.
    *New Terminology*
    Transient states.
    Recurrent states.
    Reducible Markov chains.
    Irreducible Markov chains.
    Communicating classes.

  • @Realstranger69
    @Realstranger69 1 year ago +2

    Hello, dumb question. Shouldn't state 2 be transient also? I mean, there is an extremely small chance (but not zero) that in a random walk we go from state 2 to state 1 and then keep looping through state 1 forever, hence never coming back to state 2? No? Thanks, love your vids.

  • @jingyingsophie8822
    @jingyingsophie8822 1 year ago +3

    I don't quite understand the part where 2 is also a recurrent state in the first example. If the definition of a recurrent state is that the probability of returning to that state is 1 (i.e. guaranteed), wouldn't 2 be a transient state, since there is the possible case where 1 goes back to itself ad infinitum?

    • @dariovaccaro9401
      @dariovaccaro9401 1 year ago +2

      Yes, that's true. I think he doesn't define the two different cases well enough.

  • @melissachen1581
    @melissachen1581 3 years ago +1

    I think there is a mistake at 2:56? 2 is not a recurrent state, because after we leave 2, the chance of going back to 2 is less than 1 when 1 loops on itself. Only 1 is a recurrent state, because after we leave 1, it's 100% that we will come back to 1. Can someone confirm that?

    • @Mosil0
      @Mosil0 2 years ago

      I was thinking the same thing, but I suppose if you consider an infinite number of steps, eventually the probability of going back to 2 approaches 100%.

  • @sushmitagoswami2033
    @sushmitagoswami2033 8 months ago

    Love the explanation!

  • @cassidygonzalez374
    @cassidygonzalez374 4 years ago

    Love your videos! Very clearly explained

  • @niccolosimonato1478
    @niccolosimonato1478 3 years ago +1

    Damn, that's a smooth explanation.

  • @willbutplural
    @willbutplural 2 years ago

    Amazing video again 👍

  • @stivenap156
    @stivenap156 3 years ago

    I am now a fan! New subscriber!

  • @さくら-z4y3k
    @さくら-z4y3k 14 days ago

    Thank you so much

  • @丁珊珊-t4o
    @丁珊珊-t4o 3 years ago

    Wow, this kind of random walk demo is very helpful.

  • @preritgoyal9293
    @preritgoyal9293 8 months ago

    Great, brother 👌👌
    So, if the stationary distribution has all non-zero values, will the chain be irreducible?
    (Since all states can communicate with each other.)
    And reducible if any of the states has a 0 value in the stationary distribution?
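
One way to probe the question above numerically is to compute a stationary distribution as a left eigenvector of the transition matrix and look at which entries are zero. The sketch below uses an assumed 3-state matrix (not the video's numbers). One direction is clean: a finite irreducible chain has a unique stationary distribution and it is strictly positive. The converse needs care, since a reducible chain with several closed classes can also admit strictly positive stationary distributions (mixtures over the classes), so a positive stationary vector alone does not certify irreducibility.

```python
import numpy as np

# Assumed transition matrix (rows sum to 1); state 0 is transient here.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.6, 0.4],
    [0.0, 1.0, 0.0],
])

# A stationary distribution pi satisfies pi @ P = pi, i.e. pi is a left
# eigenvector of P for eigenvalue 1 (equivalently an eigenvector of P.T).
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalise to a probability vector

print("stationary distribution:", np.round(pi, 4))
# Prints roughly [0.0, 0.7143, 0.2857]: the transient state 0 gets zero mass,
# consistent with this chain being reducible.
```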

  • @zahraheydari172
    @zahraheydari172 2 years ago +1

    Thank you for your channel and all your videos. I had a question watching this video: how does this relate to the definition of a Markov chain you provided in part one, which said the probability of the future state only depends on the current state?

  • @yijingwang7308
    @yijingwang7308 1 year ago

    Thank you for your video. But I am confused: you said the sum of outgoing probabilities equals 1, but in the first example, the sum of outgoing probabilities of state 0 is less than 1?
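
On the "rows sum to 1" point: every state's outgoing probabilities must sum to exactly 1, and a self-loop (an arrow from a state back to itself), if one exists, counts toward that sum even though it is easy to overlook in a diagram. A quick sanity check on an assumed matrix:

```python
import numpy as np

# Assumed transition matrix; the 0.5 self-loop at state 0 is part of its outgoing mass.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.6, 0.4],
    [0.0, 1.0, 0.0],
])
assert np.allclose(P.sum(axis=1), 1.0), "each row of a transition matrix must sum to 1"
```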

  • @lebzgold7475
    @lebzgold7475 3 years ago

    Amazing animation! Thank you.

  • @OmerMan992
    @OmerMan992 3 years ago +1

    Great videos!
    Would you consider making a video (or videos) on queueing theory for stochastic models, please?

  • @arafathossain1803
    @arafathossain1803 2 years ago

    Great one

  • @SuiLamSin
    @SuiLamSin 7 months ago

    Very good video.

  • @Frog-c5y
    @Frog-c5y 4 days ago

    Is there a video on the No-U-Turn Sampler (NUTS)? Thanks

  • @llss79
    @llss79 3 years ago

    You could have explained what the utility of decomposing Markov chains into irreducible pieces is, and what the mathematical difference is when considering them separately.

  • @williammoody1911
    @williammoody1911 3 years ago

    Love the videos. Can't wait to get you to 100k subs!

  • @مصطفىعبدالجبارجداح

    Thanks

  • @ayushshekhar1901
    @ayushshekhar1901 1 year ago

    Good presentation, but I have a doubt about the end: how can we go from any state to any other state after the transformation to similar states?

  • @muhammadrivandra5065
    @muhammadrivandra5065 4 years ago

    Subscribed, awesome stuff dude

  • @anushaganesanpmp7602
    @anushaganesanpmp7602 4 years ago

    Please upload more detail on properties and applications.

  • @SARKARSAIMAISLAM
    @SARKARSAIMAISLAM 1 year ago

    Great video...
    Class 1 (state 0) and class 3 (state 3) can't communicate with the others, so how are they communicating classes???

  • @736939
    @736939 3 years ago +1

    Basically these are the strongly connected components.

    • @NormalizedNerd
      @NormalizedNerd 3 years ago

      Right you are... strongly connected components.
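
A small sketch of that correspondence, assuming the networkx library and an illustrative 4-state chain (not necessarily the video's example): communicating classes are exactly the strongly connected components of the transition graph, and in a finite chain a class is recurrent precisely when it is closed, i.e. no edge leaves it.

```python
import networkx as nx

# One directed edge (i, j) for every transition probability P[i][j] > 0.
# Illustrative structure: {0} and {3} are singleton classes, {1, 2} communicate.
edges = [(0, 0), (0, 1), (1, 2), (2, 1), (2, 3), (3, 3)]
G = nx.DiGraph(edges)

for component in nx.strongly_connected_components(G):
    # Closed = every edge out of the class stays inside the class.
    closed = all(v in component for u in component for v in G.successors(u))
    label = "closed -> recurrent" if closed else "not closed -> transient"
    print(sorted(component), label)
```

Here {0} and {1, 2} leak probability rightwards and are transient, while {3} is absorbing and hence recurrent; swap in the edge list of the chain you care about.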

  • @sumitlahiri209
    @sumitlahiri209 4 years ago

    Fantastic!!

  • @johnmandrake8829
    @johnmandrake8829 4 years ago

    Yes, more please.

  • @webdeveloper-vy7hb
    @webdeveloper-vy7hb 3 years ago

    How did you use manim to represent the random walk with the blinking effect? Could you share that portion of the code? I started learning manim recently but couldn't manage to do that.

    • @NormalizedNerd
      @NormalizedNerd 3 years ago

      I created a custom manim object to create the graphs (Markov chains). Then I'm just walking through the vertices and edges. The blinking effect is just creating a circle and fading it immediately.

    • @webdeveloper-vy7hb
      @webdeveloper-vy7hb 3 years ago

      @@NormalizedNerd I see. It would be great if you could share the custom object code.
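
For readers asking the same thing: a minimal sketch of the blinking idea the author describes, assuming Manim Community Edition (this is not the channel's actual code; the node positions, transition matrix, and class name are made up for illustration).

```python
import random
import numpy as np
from manim import Scene, Circle, Dot, FadeIn, FadeOut, YELLOW

class RandomWalkBlink(Scene):
    def construct(self):
        # Hypothetical positions for states 0, 1, 2.
        positions = [np.array([-3.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([3.0, 0.0, 0.0])]
        self.add(*[Dot(p) for p in positions])  # draw the states

        # Illustrative transition matrix (rows sum to 1).
        P = [[0.4, 0.6, 0.0],
             [0.0, 0.7, 0.3],
             [0.0, 1.0, 0.0]]

        state = 0
        for _ in range(10):
            # The "blink": a circle appears at the current state and fades right away.
            blink = Circle(radius=0.35, color=YELLOW).move_to(positions[state])
            self.play(FadeIn(blink), run_time=0.15)
            self.play(FadeOut(blink), run_time=0.15)
            # Sample the next state from the current row of P.
            state = random.choices(range(3), weights=P[state])[0]
```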

  • @asthaagha9505
    @asthaagha9505 1 year ago

    🥺🥺🥺 Thank you

  • @daniekpo
    @daniekpo 1 year ago

    Great video. Just one observation: state 1 is NOT recurrent. A state cannot be recurrent and transient at the same time. The probability of never visiting state 0 again is greater than 0, so by definition it can't be recurrent. To be recurrent, all paths leading out of the state have to eventually lead back to that state, but that's not the case for state 0. Am I missing something?

  • @ahlemchouial4621
    @ahlemchouial4621 3 years ago

    Thank you so much, amazing videos!!!

  • @arvinpradhan
    @arvinpradhan 4 years ago +1

    Discrete-time Markov chains and continuous-time Markov chains, please.

  • @geethanarvadi
    @geethanarvadi 1 year ago

    If we have state space {0, 1, 2, 3} and a given transition matrix, how do we find p_ij(n)? Please explain this 😢
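
For the question above: with a finite state space, the n-step transition probabilities p_ij(n) are simply the entries of the n-th matrix power, p_ij(n) = (P^n)_ij. A minimal sketch with an assumed 4-state matrix (not taken from the video):

```python
import numpy as np

# Assumed transition matrix on the state space {0, 1, 2, 3}; rows sum to 1.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.3, 0.2, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

n = 5
Pn = np.linalg.matrix_power(P, n)      # (P^n)[i, j] = p_ij(n)
print(f"p_02({n}) =", Pn[0, 2])        # chance of being in state 2 after n steps from state 0
```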

  • @c0d23
    @c0d23 2 years ago

    What books would you recommend for learning statistics, probability, and Markov chains?

    • @NormalizedNerd
      @NormalizedNerd 2 years ago

      The Elements of Statistical Learning (Springer)
      Markov Chains by J.R. Norris

  • @mohammedbelgoumri
    @mohammedbelgoumri 2 years ago

    Great video, is the source code available somewhere?

  • @MrFelco
    @MrFelco 9 months ago

    Hang on, if you define a transient state as 'the probability of a state returning to itself is less than 1', then in the first example, would state 2 not also be a transient state? Reason being, there could be a random walk in which you go from state 2 to state 1, and then state 1 keeps looping back on itself infinitely, never going back to state 2. Then the probability of state 2 returning to itself is less than 1, given there is a random walk in which it does not return to itself.

    • @mohamedaminekhadhraoui6417
      @mohamedaminekhadhraoui6417 7 months ago

      The probability of state 1 looping back to itself forever is 0. The chain is bound to return to 2 at some point.

    • @mohamedaminekhadhraoui6417
      @mohamedaminekhadhraoui6417 7 months ago

      In all random walks that go on forever, we will go back to 2 if we start there.

  • @kaushalgagan6723
    @kaushalgagan6723 4 years ago

    More 🤩....

  • @dareenoudeh4485
    @dareenoudeh4485 3 years ago

    You are awesome

  • @migratingperson1165
    @migratingperson1165 1 year ago

    Found this math concept through Numb3rs and got curious.

  • @SJ23982398
    @SJ23982398 3 years ago

    I will be honest, I was ready to find another video when I heard the Indian accent. But then I saw the high upvote/downvote ratio and stayed, and I don't regret it!

  • @PsynideNeel
    @PsynideNeel 4 years ago

    When is the facecam coming?

  • @flyguggenheim
    @flyguggenheim 4 months ago

    I think it's healing my mild depression, thank you.

  • @arounderror3747
    @arounderror3747 1 year ago

    osu?

  • @DejiAdegbite
    @DejiAdegbite 5 months ago

    No wonder it's called the Gambler's Ruin. 🤣

  • @laodrofotic7713
    @laodrofotic7713 2 years ago

    I paused the video at the 1:00 mark to tell you it is NOT GOOD to refer to states A, B, and C while the picture says states 1, 2, and 3. FFS. OK, now I will watch the rest of it, but I think this will be a waste of time just from this start; I can tell you can't explain crap.

    • @lorinx7255
      @lorinx7255 2 months ago

      A and B are variables used in the definition, like the generalized variables you find in books, so you can apply them to any example.

  • @tsunningwah3471
    @tsunningwah3471 3 years ago

    I love you

  • @prakashraj4519
    @prakashraj4519 2 years ago +3

    Add some music