WebSci'24: Keynote by Dirk Hovy

  • Published 3 Jun 2024
  • Dirk Hovy is a professor in the Department of Computing Sciences at Bocconi University in Milan, Italy, and the scientific director of its Data and Marketing Insights research unit. Before that, he was a faculty member and postdoc in Copenhagen; he holds a PhD in NLP from USC and a master's degree in linguistics from Marburg, Germany. He is interested in the interaction between language, society, and machine learning: what language can tell us about society, and what computers can tell us about language. He has authored over 100 articles on these topics, including three best-paper award winners. He is also the author of two textbooks on using text analysis in Python for social science research (bit.ly/3dhaEQ7 and bit.ly/3sYiwMH). Dirk has organized one conference and several workshops (on abusive language, ethics in NLP, and computational social science and sociolinguistics).
    Can you guess how many AI models you interacted with today? Likely more than you realize: AI models manage our email, traffic, hiring, and search, and suggest shows to binge-watch. They are often difficult to detect unless they act particularly human. AI models that code, paint, write, and play appear more human. But are they? And are we harming ourselves by humanizing these models?
    In this talk, I will discuss our common tendency to humanize AI models. Ascribing human characteristics to unfamiliar objects makes them more approachable and acceptable, and in the digital age we have begun to anthropomorphize AI models. But at what cost? Crediting AI models with human abilities they lack mixes fact and fiction and grants them powers they do not have. The result is exaggerated claims, missed obstacles, and obscured weaknesses, which leads to AI risks and misuse. Drawing on examples from physics, psychology, philosophy, and personal stories, we will discuss what models can do without human abilities, and the everyday intricacies they cannot handle that we manage without thinking. I will also discuss AI's real threat: human prejudices and biases.
    Finally, I hope this talk helps you better appreciate humanity. AI models may mirror human intellect and experience. They can help us realize our full potential and enhance society, but they are not like us! Recognizing this distinction is critical to responsible, ethical, and beneficial AI use.
  • Science & Technology
