Manipulating and Measuring Model Interpretability

  • Published Sep 11, 2024
  • Forough Poursabzi, Researcher, Microsoft Research
    Presented at MLconf 2018
    Abstract: Machine learning is increasingly used to make decisions that affect people’s lives in critical domains like criminal justice, fair lending, and medicine. While most research in machine learning focuses on improving the performance of models on held-out datasets, this is seldom enough to convince end-users that these models are trustworthy and reliable in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods and helping end-users make informed decisions. Despite the growing body of work on developing interpretable models, there is still no consensus on the definition and quantification of interpretability. In this talk, I will argue that to understand interpretability, we need to bring humans into the loop and run human-subject experiments. I approach the problem of interpretability from an interdisciplinary perspective that builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust. I will talk about a set of controlled user experiments in which we manipulated design factors commonly thought to make models more or less interpretable and measured their influence on users’ behavior. Our findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end-users.
    See Forough's presentation slides on our SlideShare page here: www.slideshare...