Thank you for the simple explanation. I was stuck in a research paper with a causal graph diagram which did not include much information. This video saved my day
7:56 Hey, I recognize that. It's Sugeno fuzzy logic! Makes sense, since we are talking about inference. I've been wondering how one could mix graphs and fuzzy logic, but wasn't exactly looking for it. What a nice surprise.
Absolutely wonderful. Thank you
Thank you for posting such a clear and useful explanation
In the graph there is no connection between I and EH; the only arrow pointing to EH was from Dem, so I didn't understand where the term P(EH | Dem, I, CC) came from. It should have been P(EH | Dem) from the start, as you eventually arrived at using the Markov assumption. But I didn't get what happened prior to the assumption. The probability at each node should depend only on the incoming links, and hence the parents (this is essentially the Markov assumption). What am I missing?
P(EH | I, Dem, CC) first appears in the section of the video where I ignore the causal graph structure. There, I was just using the definition of conditional probability to decompose a joint probability like P(A, B, C, D, E) into P(A) x P(B | A) x P(C | B, A) x P(D | C, B, A) x P(E | D, C, B, A). I wanted to show this first to motivate the idea that the causal Markov assumption can make this decomposition more concise, which is why we can write just P(EH | Dem) rather than P(EH | Dem, I, CC).
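The collapse from the fully conditioned chain-rule term to the Markov term can be checked numerically. Here is a minimal Python sketch using a hypothetical three-node chain A -> B -> C with made-up probability values (a toy stand-in for the video's graph): the joint is built from the Markov factorization, and the generic chain-rule term P(C | B, A) turns out to equal P(C | B) for every value combination.

```python
import itertools

# Hypothetical conditional probability tables for a chain A -> B -> C
# (binary variables). Under the causal Markov assumption, C depends
# only on its parent B.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # keys: (b, a)
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # keys: (c, b)

# Joint distribution from the Markov factorization:
# P(A, B, C) = P(A) * P(B | A) * P(C | B)
joint = {(a, b, c): p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]
         for a, b, c in itertools.product([0, 1], repeat=3)}

# The generic chain rule would condition C on everything earlier:
# P(C | B, A) = P(A, B, C) / P(A, B)
def p_c_given_ba(c, b, a):
    p_ab = sum(joint[(a, b, cc)] for cc in [0, 1])
    return joint[(a, b, c)] / p_ab

# Because C's only parent is B, conditioning on A adds nothing once B is known.
for a, b, c in itertools.product([0, 1], repeat=3):
    assert abs(p_c_given_ba(c, b, a) - p_c_given_b[(c, b)]) < 1e-12
print("P(C | B, A) == P(C | B) for all values")
```

The same collapse is what lets P(EH | Dem, I, CC) in the naive decomposition become P(EH | Dem) once the graph structure is taken into account.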
@lesliemyint1865 Got it. So the initial one was the naive assumption. Crystal clear.
Your teaching is really good. Thank you
👍👍👍