1 - A Brief Introduction to Causal Inference (Course Preview)

  • Published 21 Sep 2024

COMMENTS • 46

  • @gpranjan
    @gpranjan 5 months ago

    Amazing introduction to the subject. Explained the generic problem and an intuitive solution.
    I now feel inspired to read the rest of the units, and understand the details.

  • @lilyha2470
    @lilyha2470 3 years ago +8

    Wonderful lecture, thanks. Glad I found you.

  • @duruphd
    @duruphd 1 year ago +1

    This is fantastic. I have never seen Simpson's Paradox explained this way. Makes perfect sense to me now.

  • @GFedya
    @GFedya 3 years ago

    Thank you so much! I think that's the point where I have at least started to understand causal inference.

  • @meosodeep8239
    @meosodeep8239 4 years ago +1

    Thank you so much. This video and the course are super, super useful for me. I have gotten stuck applying machine learning to the economic evaluation of health interventions. I am waiting for the rest of the course day by day. Thank you again.

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago

      I'm glad it will be useful! Only a few more days until I start uploading weekly course videos :)

  • @iirolenkkari9564
    @iirolenkkari9564 2 years ago +6

    Great lecture and a great lecturer! I have read a bit of Judea Pearl's Causality, but it's a bit much for me right now. My brain needs a somewhat more approachable study path, without being overly simplistic. This lecture series seems like a great resource.

  • @eleonoramisino7370
    @eleonoramisino7370 3 years ago +1

    Brilliant lecture! Thank you very much!

  • @veenitaanand5913
    @veenitaanand5913 3 years ago

    Great, thank you. I got confused around the 25:08-25:10 mark.

  • @shubhpachchigar1457
    @shubhpachchigar1457 4 years ago +1

    The course will be great, but an assignment with each lecture would be very helpful!

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago +3

      Unfortunately, I don't think I'll have time to produce full assignments while the course is running, but you'll see that there are several mini quizzes in the coming lectures and that there are many more questions in the book.

  • @charismaticaazim
    @charismaticaazim 3 years ago +1

    At 35:39 you talk about controlling for adjustment variables. Is that another way of saying you're marginalizing over these adjustment variables?
    Thank you for sharing your understanding.

  • @pambashatsonfasco1453
    @pambashatsonfasco1453 3 years ago

    Just wonderful

  • @yusichou7365
    @yusichou7365 4 years ago +1

    Good course. Thank you very much~~

  • @tyleryu6432
    @tyleryu6432 4 years ago +1

    Thank you!

  • @ItIsEasyPeasy
    @ItIsEasyPeasy 2 years ago +1

    Great lecture! I wonder: if we assume the shoes-in-bed example is observational (meaning we cannot ask people to wear shoes randomly), how do your last few slides help to identify the cause? What is the intuition for the solution in that case?

  • @Theviswanath57
    @Theviswanath57 4 years ago +1

    So the process running behind my associating my headaches with my in-laws' presence is motivated reasoning :).

  • @fairuzshadmanishishir8171
    @fairuzshadmanishishir8171 4 years ago

    Great content.
    Please upload the full series.

  • @zhengyuwu
    @zhengyuwu 2 years ago

    Well explained!

  • @nikolaizaicev9297
    @nikolaizaicev9297 3 years ago

    Thank you for the explanation.
    I had read that part in Judea Pearl's book, but did not understand the parts on confounding, the conditional probability formula, and randomized experiments right away. But I got the point after your video, well done!

  • @MsAnnVash
    @MsAnnVash 3 years ago +4

    Hello!
    Thank you for a truly amazing and transparent introduction to causal inference. Another example that proves why it is important to learn probability theory (Bayes' formula comes into play).
    I have a question. On the last slide you get the values 12.9% and 19.4%. How should these values be interpreted?

    • @BradyNealCausalInference
      @BradyNealCausalInference  3 years ago +6

      This means that taking treatment A gives the average person a 19.4% chance of survival and taking treatment B gives the average person a 12.9% chance of survival. Essentially, these causal quantities give you exactly what you want for decision making. The regular (non-causal) conditional expectations E[Y | t] give you information that isn't relevant for decision making. (A worked version of this calculation appears after this thread.)

    • @curtischee2532
      @curtischee2532 2 years ago

      @@BradyNealCausalInference
      I am confused. I thought there is a 19.4% chance of dying if a person takes treatment A.

    • @nicholasornstein6691
      @nicholasornstein6691 2 years ago

      @@curtischee2532 Yeah, I'm pretty sure that's what he meant. These percentages (i.e., the outcome Y) are the percentages of people that die in the given circumstances (e.g., in the Condition=Mild, Treatment=B cell, only 10% died).
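
To expand the calculation behind those 19.4% and 12.9% figures: a minimal sketch, assuming the mortality table from the lecture's Simpson's paradox example (Treatment A: 15% mortality among 1400 mild and 30% among 100 severe patients; Treatment B: 10% among 50 mild and 20% among 500 severe patients; these counts are an assumption, not stated in this thread). With Y as the death indicator, as the follow-up replies note, the standard adjustment formula E[Y(t)] = sum_c E[Y | t, c] * P(c) reproduces the two numbers:

```python
# Assumed per-group counts and mortality rates for the lecture's
# Simpson's paradox table (illustrative reconstruction, not from this thread).
groups = {
    # (treatment, condition): (number of patients, mortality rate)
    ("A", "mild"):   (1400, 0.15),
    ("A", "severe"): (100,  0.30),
    ("B", "mild"):   (50,   0.10),
    ("B", "severe"): (500,  0.20),
}

total = sum(n for n, _ in groups.values())   # 2050 patients overall
p_cond = {c: sum(n for (_, c2), (n, _) in groups.items() if c2 == c) / total
          for c in ("mild", "severe")}       # P(condition): ~0.71 mild, ~0.29 severe

# Scenario 1 (condition is a confounder): adjust for condition.
# E[Y(t)] = sum_c E[Y | t, c] * P(c), with Y = 1 meaning death.
for t in ("A", "B"):
    ey_t = sum(groups[(t, c)][1] * p_cond[c] for c in ("mild", "severe"))
    print(f"E[Y({t})] = {ey_t:.3f}")         # A: ~0.194, B: ~0.129
```

Under this adjustment, treatment B has the lower expected mortality (12.9% vs 19.4%), which is why it is preferred in scenario 1 even though its overall, unadjusted rate looks worse.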

  • @ליאורזלזניאק
    @ליאורזלזניאק 1 year ago

    On the slide at 29:47, I don't understand how the inequality in the first line makes sense, because Y(1) is defined as the random variable Y|T=1, and the same with 0.

  • @amins6695
    @amins6695 2 years ago

    As a general question, how is it possible to infer the causal structure from observational data? In the examples, it is assumed that we know the structure of the graph. However, in reality, we may just know which factors may be important, but not the structure of the graph.

  • @aberasategi
    @aberasategi 4 years ago +1

    On Simpson's paradox, in scenario 2, where treatment is a cause of condition, is there no confounding association?

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago +3

      That's right, because C is not a common cause (confounder) of T and Y in that scenario.
      In that scenario, causation flows along both the path T --> Y and the path T --> C --> Y. And we want to measure the total causation, so we want to not block the causation that flows along either of those paths. More on this in week 3.
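
A quick numeric illustration of that point, reusing the same assumed table as in the sketch above (again an assumption, not from this thread): in scenario 2, conditioning on C would block part of the effect flowing along T --> C --> Y, so the total causal effect shows up in the unconditional mortality per treatment instead.

```python
# Same assumed counts as in the earlier sketch.
groups = {
    ("A", "mild"):   (1400, 0.15),
    ("A", "severe"): (100,  0.30),
    ("B", "mild"):   (50,   0.10),
    ("B", "severe"): (500,  0.20),
}

# Scenario 2 (condition is a mediator on T -> C -> Y): do NOT adjust for C.
# The unconditional mortality per treatment captures the total causal effect.
for t in ("A", "B"):
    n_t = sum(n for (t2, _), (n, _) in groups.items() if t2 == t)
    deaths = sum(n * rate for (t2, _), (n, rate) in groups.items() if t2 == t)
    print(f"E[Y | T={t}] = {deaths / n_t:.3f}")   # A: ~0.160, B: ~0.191
```

So the conclusion flips between the two scenarios: B looks better once we adjust for the confounder in scenario 1, while A looks better when C is a mediator and the mediated path is left unblocked.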

  • @hustlw
    @hustlw 1 year ago

    Thanks a lot. But I'm a little confused about where the causal graph on page 27 (time 39:17) comes from. Is it just an assumption? My biggest trouble is that I can't find a way to build a causal graph before doing any analysis.

  • @utsabkhakurel9742
    @utsabkhakurel9742 11 months ago

    Why assign the scarcer Treatment B to the more severe patients? Shouldn't severe patients receive Treatment A?

  • @aazimlakhani8250
    @aazimlakhani8250 3 years ago

    Could you please expand the formula on the last slides to show how you got the values?

  • @Theviswanath57
    @Theviswanath57 4 years ago +1

    On Simpson's paradox: what are some general guidelines for knowing whether we are in Scenario 1 or Scenario 2?

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago +3

      Great question! The easiest answer for current practice is probably "get a subject matter expert to tell you."
      But people in machine learning don't like that answer. We want to learn everything from the data. Unfortunately, these two structures imply the same set of statistical independencies, so we cannot know which scenario we are in from just observational data! A couple of ways to get around this:
      1. Run experiments to see which variable changes which variable.
      2. Make some assumptions about the specific functional form.
      3. Ask a subject matter expert.
      We'll see #1 and #2 when we get to the causal discovery portion of the course.
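
A small sketch of the "same statistical independencies" point: both graphs are fully connected over (T, C, Y), so any positive joint distribution factorizes according to either one, and observational data alone cannot tell them apart. The random joint below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)     # arbitrary joint P(t, c, y)

# Pieces shared by both graphs
p_tc = p.sum(axis=2)                               # P(t, c)
p_y_given_tc = p / p_tc[:, :, None]                # P(y | t, c)

# Scenario 1 (C is a confounder: C -> T, C -> Y, T -> Y):
#   P(t, c, y) = P(c) * P(t | c) * P(y | t, c)
p_c = p_tc.sum(axis=0)
p_t_given_c = p_tc / p_c[None, :]
recon1 = p_c[None, :, None] * p_t_given_c[:, :, None] * p_y_given_tc

# Scenario 2 (C is a mediator: T -> C, C -> Y, T -> Y):
#   P(t, c, y) = P(t) * P(c | t) * P(y | t, c)
p_t = p_tc.sum(axis=1)
p_c_given_t = p_tc / p_t[:, None]
recon2 = p_t[:, None, None] * p_c_given_t[:, :, None] * p_y_given_tc

# Both factorizations reproduce the same observed joint exactly,
# so the data cannot identify which graph generated it.
print(np.allclose(recon1, p), np.allclose(recon2, p))   # True True
```

This is exactly why one of the options above (interventions, functional-form assumptions, or domain knowledge) is needed to break the tie.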

  • @qweas0
    @qweas0 3 years ago

    Thank you for this series. Could you provide the slides in PowerPoint format? The PDF format repeats the slides because of the animations.

  • @xuanzhao5594
    @xuanzhao5594 3 years ago

    3:08 I like the COVID-27 example so much. However, I think we are being too naive if we only try to minimize the number of deaths... I just hope the real world could also treat this problem the way it is treated in the video, with no politics involved lol.

  • @binxu3763
    @binxu3763 4 years ago

    Hi, the content is so interesting! I wonder if I can ask: in scenario 2 of the COVID-27 example, is the condition (mild or severe) diagnosed at the time patients actually receive treatment B, rather than at the time it is prescribed?

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago +2

      You are exactly right: it is the condition at the time of actually RECEIVING treatment, as opposed to the condition at the time the treatment is prescribed.

  • @andreplima1
    @andreplima1 3 years ago

    Loved the Covid-27 hypothetical scenario lol

  • @arjunvs6805
    @arjunvs6805 4 years ago

    Hi,
    Thank you for making this amazing content. However, I have a question in my mind:
    is confounding good or bad? In your example you explained that drinking causes headaches, and I believe that is confounding. Actually, it provides more insight into the data, right?
    Please share your thoughts and correct me if I am wrong.
    PS:
    I am a newbie.

    • @BradyNealCausalInference
      @BradyNealCausalInference  4 years ago +1

      Confounding is neutral in the sense that it is a description of part of how the data is generated. Often it means that there is some common cause of the treatment and the outcome. However, confounding is bad in the sense that it makes it hard to measure causation. We can only measure associational quantities (e.g. means, conditional means, covariances, etc.). Confounding makes it so that we must be much more careful in figuring out causal effects. If there were no confounding, then everything would be simple; E[Y(t)] would equal E[Y | T = t]. Hopefully, this will become more clear in the coming weeks (by week 3). (A small simulation of this point follows this thread.)

    • @arjunvs6805
      @arjunvs6805 4 years ago

      @@BradyNealCausalInference Now it's perfect. Thank you so much!
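
To make the "E[Y(t)] would equal E[Y | T = t] without confounding" point concrete, here is a minimal simulation loosely based on the lecture's shoes-and-headache example; the structural model and all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Confounder: drank last night (a common cause of "wearing shoes to bed"
# and "waking up with a headache"). All numbers are illustrative.
c = rng.binomial(1, 0.3, n)

def p_headache(t):
    # The true causal effect of T (shoes in bed) on Y (headache) is zero here.
    return 0.1 + 0.0 * t + 0.2 * c

# Observational regime: T depends on the confounder.
t_obs = rng.binomial(1, 0.1 + 0.7 * c)
y_obs = rng.binomial(1, p_headache(t_obs))
print(y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean())  # ~0.14, a spurious "effect"

# Randomized regime: T assigned by coin flip, independent of the confounder.
t_rct = rng.binomial(1, 0.5, n)
y_rct = rng.binomial(1, p_headache(t_rct))
print(y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean())  # ~0.0 = E[Y(1)] - E[Y(0)]
```

The conditional mean difference looks like a treatment effect in the observational data purely because of the confounder; randomization removes the confounder's influence on T and the gap disappears.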

  • @rasraster
    @rasraster 2 years ago

    Covid-27? NOOOOOOOOO!