pyGAM: balancing interpretability and predictive power using... - Dani Servén Marín

  • Published 1 Jul 2024
  • PyData Berlin 2018
    With nonlinear models it is difficult to find a balance between predictive power and interpretability. How does feature A affect the output y? How will the model extrapolate? Generalized Additive Models are flexible and interpretable, with great implementations in R, but few options in the Python universe. pyGAM is a new open source library that offers to fill this gap.
    Slides: github.com/dswah/PyData-Berli...
    ---
    www.pydata.org
    PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
    PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
    00:00 Welcome!
  • Science & Technology

COMMENTS • 5

  • @masonholcombe3327
    @masonholcombe3327 6 days ago

    Smoothing having a closed-form solution so similar to ridge regression's is so satisfying.
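
    The parallel the comment points at can be sketched in a few lines: a penalized least-squares fit has the closed form β = (XᵀX + λP)⁻¹Xᵀy, where ridge regression takes P = I and a smoothing spline swaps in a wiggliness penalty matrix. A minimal numpy illustration (the design, penalty, and data here are made up for demonstration, not taken from the talk):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 5
    X = rng.normal(size=(n, p))
    y = X @ np.ones(p) + rng.normal(scale=0.1, size=n)

    lam = 1.0
    P = np.eye(p)  # ridge penalty; a GAM smoother would use a wiggliness matrix S instead
    # Closed-form penalized least-squares solution: beta = (X'X + lam*P)^{-1} X'y
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    ```

    Replacing `np.eye(p)` with a second-difference penalty on spline coefficients gives the smoothing estimator the commenter is referring to; only the penalty matrix changes, not the form of the solution.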

  • @rajavelks6861
    @rajavelks6861 1 year ago +2

    I'm eager to see how pyGAM will help with my projects (which have some unordered categorical predictors), given that it is heavily inspired by the mgcv package.
    Thanks a lot, Dani, for bringing relief to Python users.

  • @TAP7a
    @TAP7a 2 years ago +4

    Honestly, I would rank decision trees as even more interpretable than linear models, but only a little more predictive. You can put a rendering of a decision tree in front of a lay person and I think most would interpret it accurately, whereas I remember from high school that probably 30-50% of my classmates found y = mx + c absolutely terrifying.

  • @guynyamsi7729
    @guynyamsi7729 1 month ago

    Hello, and thank you for this package.
    I wanted to add an offset to my model, but it seems that isn't possible. An offset can be seen as a linear predictor whose variable has a fixed coefficient of 1 (g(mu) = f(x1) + x2): here x2 is an offset. Is it possible to fix the value of a coefficient in a linear term, for example l(1, coef_estimate=1)? In that case, variable 1 (x2) would behave like an offset.
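
    For the special case of an identity link, the fixed-coefficient offset the commenter describes can be emulated without any library support: since g(mu) = f(x1) + x2 reduces to mu = f(x1) + x2, one can fit f on the adjusted response y - x2 and add x2 back at prediction time. A hedged sketch using plain numpy and a polynomial stand-in for the smooth f (the data and the polynomial fit are illustrative assumptions, not pyGAM API):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.uniform(0, 1, n)
    x2 = rng.normal(size=n)  # the offset variable, coefficient fixed at 1
    y = np.sin(2 * np.pi * x1) + x2 + rng.normal(scale=0.1, size=n)

    # Fit the smooth part on the offset-adjusted response y - x2
    coefs = np.polyfit(x1, y - x2, deg=5)
    # Add the offset back with coefficient exactly 1
    fitted = np.polyval(coefs, x1) + x2
    ```

    This trick does not carry over to non-identity links (e.g. a log link for Poisson rates), where the offset must enter on the linear-predictor scale inside the fitting loop, which is exactly why first-class offset support in the library would be useful.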