Clean Label Poisoning Attacks: from Classification to Speech Recognition

  • Published 12 Mar 2024
  • Presented by Henry Li Xinyuan.
    In this video, we delve into the realm of poisoning attacks and defenses in speech recognition, featuring insights from a collaboration with Thomas Thebaud, Sonal Joshi, Martin Sustek, and others at CLSP. We begin with an introduction to adversarial attacks, showing how they can manipulate neural networks into misinterpreting data, from visual images to audio commands. The focus then shifts to poisoning attacks, a newer threat model in which adversaries tamper with the training data itself to compromise model integrity. Through explanations and examples, we explore different attack strategies, including dirty-label and clean-label attacks, and present defense mechanisms such as a DINO-based cluster-and-filter defense. This video is a must-watch for anyone interested in cybersecurity, machine learning, and the ongoing contest between AI advancements and adversarial threats. Join us as we unpack these topics and discuss potential defenses, the efficacy of various strategies, and future research directions. Your thoughts, suggestions, and questions are highly encouraged!
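    The cluster-and-filter idea mentioned above can be illustrated with a minimal sketch: given per-class feature embeddings (the talk uses DINO features; here we substitute synthetic 2-D vectors, and the k-means routine and thresholding choices are illustrative assumptions, not the presented method), cluster each class's embeddings and drop the small outlying cluster, on the hypothesis that clean-label poisons form a tight minority cluster within their class.

    ```python
    import numpy as np

    # Synthetic stand-in for one class's embeddings: a large clean blob
    # plus a small, well-separated cluster of poisoned examples.
    rng = np.random.default_rng(0)
    clean = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(95, 2))
    poison = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(5, 2))
    embeddings = np.vstack([clean, poison])

    def two_means(X, iters=10):
        """Plain 2-means with farthest-point initialization; returns labels."""
        c0 = X[0]
        c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]  # farthest point from c0
        centers = np.stack([c0, c1])
        for _ in range(iters):
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = np.argmin(dists, axis=1)
            for j in range(2):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    labels = two_means(embeddings)
    # Filter step: discard the smaller cluster as suspected poison.
    sizes = np.bincount(labels, minlength=2)
    suspect = np.argmin(sizes)
    keep_mask = labels != suspect
    ```

    On this toy data the smaller cluster is exactly the injected poison, so filtering it removes all five poisoned points while keeping the 95 clean ones; real embeddings are noisier, which is where a strong self-supervised feature extractor like DINO earns its keep.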
