HUJI Machine Learning Club
Nir Rosenfeld - Classification Under Strategic Self-Selection
Presented on Thursday, July 11th, 2024, 10:30 AM, room B220
Speaker
Nir Rosenfeld (Technion)
Title
Classification Under Strategic Self-Selection
Abstract:
The growing success of machine learning across a wide range of domains and applications has made it appealing as a tool for informing decisions about humans, with human users as the targets of prediction. But humans are not conventional inputs: they have goals, beliefs, and aspirations, and often take action to promote their own self-interests. One such action is participation, namely users' decisions of whether to take part in the process at all - be it job applications, university admissions, loan requests, or welfare programs. Since who participates is likely to depend on the learned decision rule, learning becomes susceptible to self-selection - a common though easily overlooked source of bias that can significantly affect learning outcomes.
Focusing on resume screening as an example task, I will present a learning setting in which the choice of classifier influences which candidates apply and which do not. From a learning perspective, this becomes a problem of model-induced distribution shift: each classifier ‘turns on’ or ‘turns off’ different parts of the data distribution, and for this challenge we propose a differential optimization framework. From a policy perspective, we show that while conventional learning can lead to arbitrary outcomes, strategic learning (which anticipates user behavior) has the capacity to almost fully determine the composition of the applying sub-population. This has concrete implications for social outcomes, which require us to rethink the meaning of equity, the role of affirmative action, and the need for regulation in learning.
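The self-selection dynamic the abstract describes can be sketched with a deliberately minimal toy model (an editorial illustration under simplifying assumptions, not the talk's actual framework): candidates who know their own score apply only if the deployed threshold rule would accept them, so the choice of classifier directly shapes the applicant pool.

```python
import numpy as np

rng = np.random.default_rng(0)
skill = rng.uniform(0, 1, 10_000)  # candidates' true quality, known to themselves

def applicant_pool(threshold):
    """Candidates self-select: they apply only if the classifier
    (a simple threshold rule) would accept them."""
    return skill[skill >= threshold]

lenient = applicant_pool(0.3)
strict = applicant_pool(0.8)

# The decision rule determines both the size and the composition
# of the sub-population that applies at all.
print(len(lenient), round(lenient.mean(), 2))
print(len(strict), round(strict.mean(), 2))
```

Even in this crude sketch, the strict rule induces a smaller, higher-skill applicant distribution, so data collected under one classifier is not representative of what another classifier would face.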
Bio:
Nir Rosenfeld is an assistant professor of Computer Science at the Technion, where he is head of the Behavioral Machine Learning lab, working on problems at the intersection of machine learning and human behavior. Before joining the Technion he was a postdoc at Harvard's School of Engineering and Applied Sciences (SEAS), where he was a member of the EconCS group, a fellow of the Center for Research on Computation and Society (CRCS), and a fellow of the Harvard Data Science Initiative (HDSI). He holds a BSc in Computer Science and Psychology and an MSc and PhD in Computer Science, all from the Hebrew University.
145 views

Videos

Tal Gordon - Hessian vs. Gauss-Newton Matrix: An Analytic Study
124 views · 2 months ago
We apologize for the slightly distorted audio. Presented on Thursday, July 4th, 2024, 10:30 AM, room B220 Speaker Tal Gordon (HUJI) Title Hessian vs. Gauss-Newton Matrix: An Analytic Study Abstract: An outstanding question in the theory of deep learning concerns the exact nature of the associated highly non-convex loss landscapes which may account for the success of gradient-based methods. This...
Ido Greenberg - Real-World AI: Risk and Robustness in Reinforcement Learning and Kalman Filtering
177 views · 2 months ago
Presented on Thursday, June 27th, 2024, 10:30 AM, room B220 Speaker Ido Greenberg (Technion) Title Real-World AI: Risk and Robustness in Reinforcement Learning and Kalman Filtering Abstract: Real-world applications of reinforcement learning (RL) are often sensitive to risks and uncertainties. Robustness to risk and uncertainty is often addressed by optimization of a risk measure of the returns,...
Hilla Schefler - A unified characterization of private learning
139 views · 2 months ago
Presented on Thursday, June 13th, 2024, 10:30 AM, room B220 Speaker Hilla Schefler (Technion) Title A unified characterization of private learning Abstract: Differential Privacy (DP) is a mathematical framework for ensuring the privacy of individuals in a dataset. Roughly speaking, it guarantees that privacy is protected in data analysis by ensuring that the output of an analysis does not revea...
Idan Attias - Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization and Privacy
388 views · 3 months ago
Presented on Thursday, May 16th, 2024, 10:30 AM, room B220 Speaker Idan Attias (BGU) Title Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization and Privacy Abstract: We investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information a learning algorithm...
Alon Peled-Cohen - Rate-Optimal Policy Optimization for Linear Markov Decision Processes
83 views · 3 months ago
Presented on Thursday, May 9th, 2024, 10:30 AM, room B220 Speaker Alon Peled-Cohen (TAU) Title Rate-Optimal Policy Optimization for Linear Markov Decision Processes Abstract: In this work we study regret minimization in online episodic linear Markov Decision Processes. We propose a policy optimization algorithm that is computationally efficient and obtains rate-optimal O(K^{1/2}) regret, where K d...
Anatoly Khina - Geometry-oriented Measures of Dependence
196 views · 6 months ago
Presented on Thursday, February 29th, 2024, 10:30 AM, room C221 Speaker Anatoly Khina (TAU) Title Geometry-oriented Measures of Dependence Abstract: One of the fundamental problems of statistics and data science is identifying and measuring dependence. This problem dates back to the works of Bravais, Galton, and Pearson in the 19th century on dependence measure design, and to the work of Rényi ...
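A standard textbook example (an editorial illustration, not taken from the talk) of why correlation alone is an inadequate dependence measure: for symmetric X, the variable Y = X² is fully determined by X yet has Pearson correlation near zero, while even a crude histogram-based mutual-information estimate detects the dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = x ** 2  # fully dependent on x, but not linearly

# Pearson correlation misses the (nonlinear) dependence entirely
r = np.corrcoef(x, y)[0, 1]

# A crude plug-in mutual information estimate from a 2D histogram detects it
counts, _, _ = np.histogram2d(x, y, bins=30)
p = counts / counts.sum()
px = p.sum(axis=1, keepdims=True)   # marginal of x
py = p.sum(axis=0, keepdims=True)   # marginal of y
nz = p > 0
mi = float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

print(round(abs(r), 3))  # near zero
print(mi > 0.3)          # clearly positive
```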
Idan Mehalel - Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension
76 views · 6 months ago
Presented on Thursday, February 15th, 2024, 10:30 AM, room C221 Speaker Idan Mehalel (Technion) Title Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension Abstract: Consider the following two classical online learning problems: (1) Suppose that n forecasters are providing daily rain/no-rain predictions, and the best among them is mistaken in at most k many days. For how m...
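The forecasting problem in the abstract is the classic prediction-with-expert-advice setting; a small sketch using the standard deterministic Weighted Majority algorithm (a textbook baseline, not necessarily the talk's method) shows its mistake bound in action:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_exp = 500, 10
truth = rng.integers(0, 2, T)                  # rain / no-rain outcomes
preds = rng.integers(0, 2, (T, n_exp))         # daily expert predictions
preds[:, 0] = truth                            # expert 0 is perfect (k = 0)

w = np.ones(n_exp)
mistakes = 0
for t in range(T):
    # Predict by weighted majority vote of the experts
    vote = 1 if w[preds[t] == 1].sum() >= w[preds[t] == 0].sum() else 0
    mistakes += int(vote != truth[t])
    w[preds[t] != truth[t]] *= 0.5             # halve weights of wrong experts

# Classic guarantee: mistakes <= 2.41 * (k + log2(n)), here about 8 since k = 0
print(mistakes)
```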
Or Sharir - Efficiency in the Age of Large Scale Models
55 views · 6 months ago
Presented on Thursday, February 8th, 2024, 10:30 AM, room C221 Speaker Or Sharir (HUJI) Title Efficiency in the Age of Large Scale Models Abstract: Since the early days of the “deep learning revolution”, circa 2012, there has been a clear push to scale up models with the hope of reaching better performance. The machine learning community went from training models in size of the order of million...
Dan Vilenchik - Towards Reverse Algorithmic Engineering of Neural Networks
325 views · 7 months ago
Presented on Thursday, February 1st, 2024, 10:30 AM, room C221 Speaker Dan Vilenchik (BGU) Title Towards Reverse Algorithmic Engineering of Neural Networks Abstract: As machine learning models get more complex, they can outperform traditional algorithms and tackle a broader range of problems, including challenging combinatorial optimization tasks. However, this increased complexity can make unde...
Itay Evron - Continual learning in linear regression and classification
116 views · 7 months ago
Presented on Thursday, January 18th, 2024, 10:30 AM, room C221 Speaker Itay Evron (Technion) Title Continual learning in linear regression and classification Abstract: Continual learning deals with learning settings where distributions change over time, thereby challenging traditional i.i.d. assumptions. Unfortunately, models tend to forget previous expertise upon fitting new tasks. To gain a b...
Kobbi Nissim - PEP talk
85 views · 7 months ago
Presented on Thursday, January 11th, 2024, 10:30 AM, room C221 Speaker Kobbi Nissim (Georgetown, Google) Title PEP talk Abstract: Learning is one of the most important tasks applied to data. When applied to sensitive personal data, preserving a strong notion of privacy such as differential privacy is desirable. A rich line of study has revealed significant theoretical and practical gaps between...
Ido Nachum - A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs
80 views · 8 months ago
Presented on Thursday, January 4th, 2024, 10:30 AM, room C221 Speaker Ido Nachum (EPFL, University of Haifa) Title A Johnson Lindenstrauss Framework for Randomly Initialized CNNs Abstract: Fix a dataset $\{ x_i \}_{i=1}^n \subset R^d$. The celebrated Johnson Lindenstrauss (JL) lemma shows that a random projection preserves geometry: with high probability, $(x_i,x_j) \approx (W \cdot x_i , W \cdot ...
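The JL statement in the abstract is easy to check numerically; here is a small sketch (an editorial illustration, not from the talk) showing that a scaled Gaussian random projection approximately preserves all pairwise distances of a random dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 1000, 500, 20            # ambient dim, projected dim, number of points
X = rng.standard_normal((n, d))    # the dataset {x_i} in R^d

# Scaled Gaussian random projection: E[||Wx||^2] = ||x||^2
W = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ W.T

# Pairwise squared distances before and after projection
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
D2_proj = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)

mask = ~np.eye(n, dtype=bool)
distortion = np.abs(D2_proj[mask] / D2[mask] - 1).max()
print(distortion)  # small: the projection nearly preserves all pairwise distances
```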
Dan Rosenbaum - Functa: data as neural fields
331 views · a year ago
Presented on Thursday, June 8th, 2023, 10:30 AM, room B220 Speaker Dan Rosenbaum (University of Haifa) Title Functa: data as neural fields Abstract: It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A p...
Ron Teichner - Identifying internal control and regulation in biological systems
66 views · a year ago
Presented on Thursday, May 11th, 2023, 10:30 AM, room B220 Speaker Ron Teichner (Technion) Title Identifying internal control and regulation in biological systems Abstract: Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regu...
Noam Razin - On the Ability of Graph Neural Networks to Model Interactions Between Vertices
128 views · a year ago
Martin Haardt, Joseph K. Chege - Maximum Likelihood Estimation of a Low-Rank Probability Mass Tensor
143 views · a year ago
Boris Shustin - Tractable Riemannian Optimization via Random Preconditioning and Manifold Learning
125 views · a year ago
Amit Attia - Stability and Acceleration
99 views · a year ago
Yoel Shkolnisky - Manifold denoising with application to electron microscopy
92 views · a year ago
Gal Vardi - Implications of the implicit bias in ReLU networks
107 views · a year ago
Boaz Barak - Empirical Challenges to Theories of Deep Learning
625 views · a year ago
Nir Weinberger - Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models
76 views · a year ago
Aryeh Kontorovich - Local Glivenko-Cantelli (or: estimating the mean in infinite dimensions)
264 views · a year ago
Moshe Shechner - Adversarial streaming: A survey
115 views · a year ago
Gilad Yehudai - On the benefits of deep and narrow neural networks
115 views · a year ago
Paz Fink - Gauss-Legendre Features for Scalable Gaussian Process Regression
141 views · a year ago
Ibrahim Jubran - Coresets and Decision Trees
120 views · a year ago
Tal Lancewicki - Delay and Cooperation in Reinforcement Learning
87 views · a year ago
Elad Granot - Generalization Bounds for Neural Networks via Approximate Description Length
183 views · 2 years ago

COMMENTS

  • @undisclosedmusic4969
    @undisclosedmusic4969 · a month ago

    Is there a written report on this? Super interesting

  • @RajivSambasivan
    @RajivSambasivan · 3 months ago

    Super cool idea, and really beautifully explained. Super

  • @KarlVonBismark
    @KarlVonBismark · 6 months ago

    ¿?

  • @tobiasarndt5640
    @tobiasarndt5640 · a year ago

    really great talk and great explainer. thank you man!

  • @bangaruraju2508
    @bangaruraju2508 · a year ago

    How can you classify the stance of a new text given by a new user?

  • @kallolroy5029
    @kallolroy5029 · a year ago

    Awesome

  • @mateuszokonski4318
    @mateuszokonski4318 · a year ago

    Thank you and best regards from Poland 🙂

  • @AoibhinnMcCarthy
    @AoibhinnMcCarthy · 2 years ago

    Great talk.

  • @bollosomjack2091
    @bollosomjack2091 · 2 years ago

    Can the PDF courseware be made public, please?

  • @bobbig7788
    @bobbig7788 · 2 years ago

    Thanks to Ravid and HUJI Machine Learning Club for sharing the info. Please advise if the slides are available online. Thank you.

  • @user-hh3nx4ds5b
    @user-hh3nx4ds5b · 2 years ago

    About the expressivity of two-head self-attention vs. one: it's probably a mistake, since the dependence on dimension is linear, not exponential, so doubling the parameters doubles the expressivity in either case, and no real benefit is gained.

  • @arnebovarne7759
    @arnebovarne7759 · 3 years ago

    Very nice about neural networks (CNNs) and their sometimes stupid mistakes