Well done Professor Biswas for breaking down a fairly complicated concept into bite-sized chunks.
Excellent video. This is my first peek into HMM.
Brilliant Lecture. Blown away by the meticulousness. Thank you, Sir.
Expect all IIT lectures to be like this one.
@@nahid862 here you go: ua-cam.com/video/G-77ZgsGjvA/v-deo.html
Thanks a lot, professor. It helps me a lot. I think studying in India is a good idea.
Only if you are interested in hard-core, theory-oriented subjects (like pure maths or theoretical physics).
Too much competition bro.. Getting into a good university is almost impossible in India
@@nikhiljahagirdar2724 It depends on what possibilities you want to have.
@@nikhiljahagirdar2724 maybe you are dumb.
BEST EXPLANATION ON THE ENTIRE WEB
Thanks Prof. This is superb explanation.
Very chill video to watch. Thanks, Professor.
Best lecture ever. Thank you Sir.
Excellent explanation sir...
For the algorithm, he hasn't incremented through the values of 'j' at all. If we compute this algorithm for any state other than the final state, the algorithm will never return. So there should've been an inner loop within the for loop to increment through all values of 'j'.
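For reference, here is a rough sketch in Python of the forward algorithm with the loop over every state j written out explicitly. The variable names (a, b, alpha, start, final) are my own and not the professor's notation; it is only meant to show where the inner loop goes.

import numpy as np

def forward(a, b, obs, start, final):
    # a[i, j] : probability of transitioning from state i to state j
    # b[j, k] : probability that state j emits symbol k
    # obs     : indices of the observed symbols v(1) ... v(T)
    n_states = a.shape[0]
    T = len(obs)
    alpha = np.zeros((T + 1, n_states))
    alpha[0, start] = 1.0                    # at t = 0 we are in the start state, nothing emitted yet
    for t in range(1, T + 1):
        for j in range(n_states):            # iterate over every state j, not just the final one
            # sum over all predecessor states i, then weight by the emission probability
            alpha[t, j] = (alpha[t - 1] @ a[:, j]) * b[j, obs[t - 1]]
    return alpha[T, final]                   # probability of the sequence ending in the final state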
thank you very much, very good explanation.
At 54 mins, near the end, when dealing with the forward algorithm, the professor doesn't iterate through all the j's. Shouldn't he be iterating through all the j's within the middle states, as any of those states can emit our observation?
Thank you.
At 50:33, to calculate the probability of being in invisible state j at time step t, i.e. alpha(j, t), the calculation should not multiply by b(j, k). Would that not make the calculation of alpha(i, t+1) wrong? The probability of emitting the expected symbols at each step should be tracked separately.
No, I stand corrected. Thinking a little more deeply about it: alpha(j, t) is not just the probability of being in invisible state j. It is the probability of being in invisible state j and also producing the sequence of visible symbols up to time step t.
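Writing that out as an equation (my own rendering of the recursion, with a_{ij} the transition probability and b_{jk} the probability that state j emits symbol k):

\alpha_j(t) = P\big(v(1), \dots, v(t),\ w(t) = j\big) = b_{j\,v(t)} \sum_i \alpha_i(t-1)\, a_{ij}

So the emission probability is already folded into alpha at every step and does not need to be tracked separately.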
love this tutorial, thank you so much
Thanks a lot Prof Biswas.
Too good explanation!
Great upload.
Thank you very much
I have a seismic signal of 11900 samples, with a sampling rate of 1000 samples per second. How do I apply this model to the signal, given that I have a complete understanding of HMM theory?
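One possible way to frame it (just a sketch, not a recipe): cut the trace into short frames, compute a feature per frame, and fit a continuous-observation HMM to the frame features. The example below assumes the third-party hmmlearn package and a Gaussian-emission HMM; the frame length, the log-energy feature, and the 3 hidden states are arbitrary choices you would have to tune for a real seismic problem.

import numpy as np
from hmmlearn import hmm                    # assumed third-party package, not covered in the lecture

signal = np.random.randn(11900)             # placeholder for the real seismic trace (1000 samples/sec)

# Split the trace into 50 ms frames (50 samples each) and use per-frame log-energy as a 1-D feature.
frame_len = 50
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
features = np.log(np.sum(frames ** 2, axis=1) + 1e-12).reshape(-1, 1)

# Fit a 3-state Gaussian HMM (the number of states is a guess; try several and compare likelihoods).
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(features)

hidden = model.predict(features)            # most likely hidden state per frame (Viterbi decoding)
print(model.score(features))                # log-likelihood of the whole frame sequence

The fit, score, and predict steps roughly correspond to the learning, evaluation, and decoding problems discussed in the lecture.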
VERY GOOD SIR
Excellent
What is an accepting state? And why do we need it?
It is where the automaton (state machine) stops. It is not specific to HMMs, though it is a related concept, since an HMM is also a state machine.
Final state
Sir, can you break this video into separate parts for the decoding and learning problems?
revising after 4 years. :)
excellent !
Mistake at 37:10. It should start from t=0 and not t=1 as shown by the lecturer. If t=1, then that is saying the starting node does not emit anything.
There is no mistake. The state at t=0 does not emit any visible symbol. It transitions into a hidden state (only). So v(0)*w(0) doesn't make sense.
The visible states/symbols are emitted only after a transition, so the machine does transition to w(1) at t=0, but the first visible symbol v(1) is emitted only after the first transition itself, i.e., at t=1.
At t=0, the machine hasn't made any transitions yet; therefore no visible states/symbols are emitted. In other words, the starting node will emit a symbol only when a transition is made to it from any other hidden state (including the state itself).
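A small illustration of that timing convention (my own notation, with w(0) as the starting node):

t = 0 : the machine sits in the starting node w(0); nothing is emitted yet
t = 1 : the first transition w(0) -> w(1) is made; the first symbol v(1) is emitted
...
t = T : the transition w(T-1) -> w(T) is made; the last symbol v(T) is emitted

So every emitted symbol is paired with a transition, and along a given state path

P(v(1), \dots, v(T) \mid \text{path}) = \prod_{t=1}^{T} a_{w(t-1)\,w(t)}\, b_{w(t)\,v(t)},

which is why the product starts at t = 1 and nothing is written for t = 0.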
Brilliant
👌
Thanks
I had asked a doubt regarding the topic discussed; the question is posted at the link below.
cs.stackexchange.com/questions/109368/prior-probability-in-hmm
I did not understand. Can you elaborate on that (0.5)^2 part?
Professor, may I kiss you, please?
First study it properly yourself.