Amazing introduction to the subject. Explained the generic problem and an intuitive solution.
I now feel inspired to read the rest of the units, and understand the details.
Wonderful lecture, thanks. Glad I found you.
This is fantastic. I have never seen Simpson's Paradox explained this way. Makes perfect sense to me now.
Thank you so much! I think that's the point where I at least have started to understand causal inference.
Thank you so much. This video and the course are super super useful for me. I have got stuck by applying machine learning on economic evaluation of health intervention. I wait for the rest of the course from day to day. Thank you again.
I'm glad it will be useful! Only a few more days until I start uploading weekly course videos :)
Great lecture and a great lecturer! I have read bit of Judea Pearl's Causality, but it's a bit much for me right now. My brain needs a somewhat more approachable study path, without being overly simplistic. This lecture series seems like a great resource.
Brilliant lecture! Thank you very much!
Great, thank you. I got confused at 25:08-25:10.
The course will be great, but an assignment with each lecture would be very helpful!
Unfortunately, I don't think I'll have time to produce full assignments while the course is running, but you'll see that there are several mini quizzes in the coming lectures and that there are many more questions in the book.
At 35:39 you talk about controlling for adjustment variables. Is that another way of saying you're marginalizing over these adjustment variables?
Thank you for sharing your understanding
Just wonderful
Good course. Thank you very much~~
Thank you!
Great lecture! I wonder if we assume the example of shoes in bed is observational (meaning we cannot ask people to wear shoes randomly), how your last few slides help to identify the cause? What is the intuition for solution in that case?
So the process running in the background while "me associating my headaches with my in-laws' presence" is motivated reasoning :).
;)
Great Content.
Please upload the full series
Thanks! Starting August 31, it will come out one week at a time, following the schedule on causalcourse.com
@BradyNealCausalInference Thanks
Well explained!
Thank you for the explanation.
I have read that part in Judea Pearl's book, but did not understand the parts on confounding, the conditional probability formula, and randomized experiments right away. But I got the point after your video, well done!
Hello!
Thank you for this amazing and transparent introduction to causal inference. Another example that proves why it is important to learn probability theory (Bayes's formula is in place).
I have a question. On the last slide you've got the values 12.9% and 19.4%. How should these values be interpreted?
This means that taking treatment A gives the average person a 19.4% chance of survival and taking treatment B gives the average person a 12.9% chance of survival. Essentially, these causal quantities give you exactly what you want for decision making. The regular (non-causal) conditional expectations E[Y | t] give you information that isn't relevant for decision making.
@BradyNealCausalInference I am confused. I thought there was a 19.4% chance of dying if a person takes treatment A
@curtischee2532 Yeah, I'm pretty sure that's what he meant. These percentages (i.e., the outcome Y) are the percentage of people who die in the given circumstances (e.g., in the Condition=Mild, Treatment=B cell, only 10% died).
On the slide at 29:47, I don't understand how the inequality in the first line makes sense, because Y(1) is defined as the random variable Y | T=1, and the same with 0.
As a general question, how is it possible to infer the causal structure from observational data? In the examples, it is assumed that we know the structure of the graph. However, in reality, we may just know which factors may be important, but not the structure of the graph.
On Simpson's Paradox, scenario 2, where treatment is a cause of condition, is there no confounding association?
That's right, because C is not a common cause (confounder) of T and Y in that scenario.
In that scenario, causation flows along both the path T --> Y and the path T --> C --> Y. And we want to measure the total causation, so we don't want to block the causation that flows along either of those paths. More on this in week 3.
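A quick simulation can make this concrete. The data-generating process below is hypothetical (all probabilities are made up for illustration): T is randomized, C is a mediator on the path T --> C --> Y, and stratifying on C blocks the mediated part of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process for scenario 2:
# treatment T is randomized, condition C is *caused by* T (a mediator),
# and death Y depends on both T and C.
T = rng.integers(0, 2, n)                                      # treatment indicator
C = (rng.random(n) < np.where(T == 1, 0.8, 0.2)).astype(int)   # severe condition more likely under T = 1
Y = (rng.random(n) < 0.05 + 0.10 * T + 0.20 * C).astype(int)   # death indicator

# Total causal effect of T on Y: since T is randomized, a plain difference
# in means recovers it (direct path T->Y plus mediated path T->C->Y).
total = Y[T == 1].mean() - Y[T == 0].mean()

# "Controlling for" the mediator C blocks the T->C->Y path, leaving only
# the direct effect and underestimating total causation.
direct = np.mean([Y[(T == 1) & (C == c)].mean() - Y[(T == 0) & (C == c)].mean()
                  for c in (0, 1)])

print(f"total effect    ≈ {total:.3f}")   # ≈ 0.10 + 0.20 * (0.8 - 0.2) = 0.22
print(f"within-C effect ≈ {direct:.3f}")  # ≈ 0.10 (direct path only)
```

So conditioning on C here throws away the causation that flows through the mediated path, which is why we don't adjust for C in scenario 2.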
Thanks a lot. But I'm a little confused about where the causal graph on page 27 (time 39:17) comes from. Is it just an assumption? My biggest trouble is that I can't find a way to build a causal graph before doing any analysis.
Why assign the scarcer Treatment B to the more severe patients? Shouldn't severe patients receive Treatment A?
Could you please expand the formula on the last slides to show how you got the values?
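For what it's worth, here is a sketch of the adjustment computation E[Y(t)] = Σ_c P(C = c) E[Y | T = t, C = c]. The per-condition death rates and the mild/severe split below are illustrative assumptions chosen so the arithmetic lands near the slide's values; check them against the actual table:

```python
# Sketch of the backdoor-adjustment computation
#   E[Y(t)] = sum_c P(C = c) * E[Y | T = t, C = c].
# The rates and the condition split are hypothetical, not read off the slide.

p_mild, p_severe = 0.7067, 0.2933           # hypothetical P(C = mild), P(C = severe)
death_rate = {                              # hypothetical E[Y | T = t, C = c]
    ("A", "mild"): 0.15, ("A", "severe"): 0.30,
    ("B", "mild"): 0.10, ("B", "severe"): 0.20,
}

def adjusted_death_rate(t):
    """Adjustment formula: average the conditional rates over P(C)."""
    return p_mild * death_rate[(t, "mild")] + p_severe * death_rate[(t, "severe")]

print(f"E[Y(A)] ≈ {adjusted_death_rate('A'):.1%}")  # ≈ 19.4%
print(f"E[Y(B)] ≈ {adjusted_death_rate('B'):.1%}")  # ≈ 12.9%
```

The key point is that the adjusted value weights each condition stratum by the *population* share P(C = c), not by the share of patients who happened to receive that treatment.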
On Simpson's Paradox: what are some general guidelines for knowing whether we are in Scenario 1 or Scenario 2?
Great question! The easiest answer for current practice is probably "get a subject matter expert to tell you."
But people in machine learning don't like that answer. We want to learn everything from the data. Unfortunately, these two structures imply the same set of statistical independencies, so we cannot know which scenario we are in from just observational data! Couple ways to get around this:
1. Run experiments to see which variable changes which variable.
2. Make some assumptions about the specific functional form.
3. Ask a subject matter expert.
We'll see #1 and #2 when we get to the causal discovery portion of the course.
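To illustrate why observational data alone can't settle it, here is a hypothetical linear-Gaussian sketch in which the Markov-equivalent graphs T --> C and C --> T generate the same joint distribution (all coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Structure (i): T causes C
T1 = rng.normal(0, 1, n)
C1 = 0.8 * T1 + rng.normal(0, 0.6, n)

# Structure (ii): C causes T (coefficients chosen so the joint matches)
C2 = rng.normal(0, 1, n)
T2 = 0.8 * C2 + rng.normal(0, 0.6, n)

# Both stories produce (up to sampling noise) the same variances and the
# same correlation, so no statistic computed from observational data alone
# can tell us which variable causes which.
corr1 = np.corrcoef(T1, C1)[0, 1]
corr2 = np.corrcoef(T2, C2)[0, 1]
print(f"T -> C: corr = {corr1:.3f}, var(C) = {C1.var():.3f}")
print(f"C -> T: corr = {corr2:.3f}, var(T) = {T2.var():.3f}")
```

Breaking this symmetry is exactly what options #1 (interventions) and #2 (functional-form assumptions) buy you.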
Thank you for this series. Can you provide the slides in PowerPoint format? The PDF format repeats the slides because of the animations.
3:08 I like the COVID-27 example so much. However, I think we are too naive if we only try to minimize the death count... I just hope the real world can also treat this problem like in the video, with no politics involved lol.
Hi, the content is so interesting! I wonder if I can ask: in scenario 2 of the COVID-27 example, is the condition (mild or severe) diagnosed at the time patients receive treatment B, rather than when it is prescribed?
You are exactly right. Condition at the time of actually RECEIVING treatment, as opposed to condition at the time of treatment prescription.
Loved the Covid-27 hypothetical scenario lol
Hehe SARS-CoV-3
Hi, thank you for making this amazing content. However, I have this question in my mind: is confounding good or bad? In your example you explained that drinking causes headaches, and I believe that is confounding. Actually, it provides more insight into the data, right? Please share your thoughts and correct me if I am wrong.
PS: I am a newbie
Confounding is neutral in the sense that it is a description of part of how the data are generated. Often it means that there is some common cause of the treatment and the outcome. However, confounding is bad in the sense that it makes it hard to measure causation. We can only measure associational quantities (e.g. means, conditional means, covariances, etc.). Confounding makes it so that we must be much more careful in figuring out causal effects. If there were no confounding, then everything would be simple; E[Y(t)] would equal E[Y | T = t]. Hopefully, this will become clearer in the coming weeks (by week 3).
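A small simulation can make the gap between E[Y(t)] and E[Y | T = t] concrete. The data-generating process is hypothetical (all numbers are made up): C confounds T and Y, so the naive conditional difference is inflated, while adjusting for C recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Hypothetical confounded data-generating process:
# C (condition) causes both T (treatment choice) and Y (death).
C = rng.integers(0, 2, n)                                         # 1 = severe condition
T = (rng.random(n) < np.where(C == 1, 0.9, 0.1)).astype(int)      # severe patients mostly get T = 1
Y = (rng.random(n) < 0.10 + 0.05 * T + 0.50 * C).astype(int)      # T raises death risk by 5 points

# Naive associational quantity E[Y | T = 1] - E[Y | T = 0]: confounded by C.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Causal quantity E[Y(1)] - E[Y(0)] via adjustment over the confounder C
# (equal weights are correct here because P(C = 0) = P(C = 1) = 0.5).
adjusted = np.mean([Y[(T == 1) & (C == c)].mean() - Y[(T == 0) & (C == c)].mean()
                    for c in (0, 1)])

print(f"naive difference    ≈ {naive:.3f}")     # badly inflated by confounding
print(f"adjusted difference ≈ {adjusted:.3f}")  # ≈ 0.05, the true effect
```

With no confounding (e.g. if T were randomized independently of C), the naive and adjusted differences would coincide, which is the E[Y(t)] = E[Y | T = t] case described above.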
@BradyNealCausalInference Now it's perfect. Thank you so much!
Covid-27? NOOOOOOOOO!