Michael I. Jordan: An Alternative View on AI: Collaborative Learning, Incentives, and Social Welfare

  • Published 13 Jun 2024
  • Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master's in Mathematics from Arizona State University and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive, biological, and social sciences.
    Abstract:
    Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single, autonomous agent. Social issues are entirely secondary in this paradigm. Indeed, the overall design of deployed AI systems is often naive: a centralized entity provides services to passive agents and reaps the rewards. Such a framing need not be the dominant paradigm for information technology. In a broader framing, agents are active, they are cooperative, their data is valuable, and they wish to obtain value from their participation in learning-based systems. Intelligence inheres as much in the overall system as it does in individual agents, be they humans or computers. This is a perspective familiar in economics, and a first goal in this line of work is to bring economics into contact with the computing and data sciences. The long-term goal is twofold: to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic, and inferential disciplines.
  • Science & Technology

COMMENTS • 3

  • @tigranishkhanov9521 8 months ago +6

    I always thought of ML as statistics plus geometry, done on a computer in high-dimensional spaces. Geometry comes in to help with the high dimensionality: instead of learning distributions exactly, which is very hard in high dimensions, we learn separating surfaces of relatively simple geometry (like hyperplanes) as approximations.
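
    A minimal sketch of that idea, assuming synthetic linearly separable data and a plain perceptron (the data construction and all parameter choices below are illustrative, not from the talk): rather than estimating the two class densities in d = 100 dimensions, the learner fits only the d + 1 parameters of a separating hyperplane w·x + b = 0.

    ```python
    import numpy as np

    # Illustrative setup: two classes in d = 100 dimensions, labeled by
    # which side of a ground-truth hyperplane through the origin they fall on.
    rng = np.random.default_rng(0)
    d, n = 100, 500
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    X = rng.normal(size=(n, d)) + np.outer(rng.choice([-1.0, 1.0], size=n), 2.0 * direction)
    y = np.sign(X @ direction)  # label = side of the true hyperplane

    # Perceptron: learn the O(d) hyperplane parameters directly,
    # never the full high-dimensional class distributions.
    w, b = np.zeros(d), 0.0
    for _ in range(50):  # passes over the data
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: nudge hyperplane toward xi
                w += yi * xi
                b += yi

    print(f"training accuracy: {np.mean(np.sign(X @ w + b) == y):.3f}")
    ```

    On separable data the perceptron converges, and the hyperplane costs only d + 1 parameters, whereas estimating the class densities accurately would typically need data exponential in d; that is the geometric shortcut the comment describes.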