Deep Neural Networks, Explanations, and Rationality

  • Published 28 Jun 2024
  • As AI increasingly becomes a part of our daily lives, its decisions can have far-reaching effects on humanity. Yet the explanations for these decisions often leave us puzzled: unlike the clear, logical reasoning that underlies human explanations, an AI's justifications can seem opaque and difficult to understand. But what if we could train two AI systems to engage in a kind of duel whose outcome is a human-like explanation of the AI's decisions? This is the promise of Generative Adversarial Networks, in which one system generates an explanation while the other judges whether that explanation was created by a machine or a human. The result is an explanation that is both intelligible to us and faithful to the workings of the AI.
    Reference links:
    1. Perspectives on Digital Humanism link.springer.com/content/pdf...
    To find out more, see the Nokia Bell Labs Responsible AI hub: www.bell-labs.com/research-in...
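    The adversarial setup described above can be sketched in a few lines of numpy. This is a hypothetical toy illustration, not the Bell Labs system: "human explanations" are reduced to a single scalar feature drawn from a normal distribution, the generator is a linear map over noise, and the discriminator is a logistic classifier that tries to tell human samples from generated ones. All names and parameters here are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # "Human" explanation feature: scalar samples from N(4, 1) (toy assumption).
    # Generator: g(z) = w_g * z + b_g over noise z ~ N(0, 1).
    # Discriminator: d(x) = sigmoid(w_d * x + b_d), the estimated probability
    # that x came from a human rather than the generator.
    w_g, b_g = 1.0, 0.0
    w_d, b_d = 0.0, 0.0
    lr, batch = 0.05, 64

    for step in range(2000):
        # --- Discriminator step: separate human samples from generated ones ---
        human = rng.normal(4.0, 1.0, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = w_g * z + b_g

        p_h = sigmoid(w_d * human + b_d)  # should approach 1 on human samples
        p_f = sigmoid(w_d * fake + b_d)   # should approach 0 on generated ones
        # Gradients of the binary cross-entropy loss w.r.t. w_d and b_d
        grad_wd = np.mean((p_h - 1.0) * human) + np.mean(p_f * fake)
        grad_bd = np.mean(p_h - 1.0) + np.mean(p_f)
        w_d -= lr * grad_wd
        b_d -= lr * grad_bd

        # --- Generator step: fool the discriminator (non-saturating loss) ---
        z = rng.normal(0.0, 1.0, batch)
        fake = w_g * z + b_g
        p_f = sigmoid(w_d * fake + b_d)
        grad_fake = -(1.0 - p_f) * w_d    # d/dfake of -log d(fake)
        w_g -= lr * np.mean(grad_fake * z)
        b_g -= lr * np.mean(grad_fake)

    # After training, the mean of the generated samples (b_g) has drifted
    # toward the "human" mean, i.e. generated explanations have become
    # statistically hard to tell apart from human ones.
    print(b_g)
    ```

    The same two-player dynamic scales up when the scalar feature is replaced by text embeddings and the linear maps by neural networks: the generator improves precisely because the discriminator keeps learning what distinguishes machine-written explanations from human-written ones.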
  • Science & Technology
