Part-1 Adversarial robustness in Neural Networks, Quantization and working at DeepMind | David Stutz

  • Published 24 May 2024
  • Part 1 of my podcast with David Stutz. (Part 2: • Working at DeepMind, I... )
    David is a research scientist at DeepMind working on building robust and safe deep learning models. Before joining DeepMind, he was a PhD student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on machine learning and graduate life that is full of insight for young researchers.
    Check out Rora: teamrora.com/jayshah
    Guide to STEM PhD AI Researcher + Research Scientist pay: www.teamrora.com/post/ai-rese...
    00:00:00 Highlights and Sponsors
    00:01:22 Intro
    00:02:14 Interest in AI
    00:12:26 Finding research interests
    00:22:41 Robustness vs Generalization in deep neural networks
    00:28:03 Generalization vs model performance trade-off
    00:37:30 On-manifold adversarial examples for better generalization
    00:48:20 Vision transformers
    00:49:45 Confidence calibrated adversarial training
    00:59:25 Improving hardware architecture for deep neural networks
    01:08:45 What's the tradeoff in quantization?
    01:19:07 Amazing aspects of working at DeepMind
    01:27:38 Learning the skill of abstraction when collaborating
    David's Homepage: davidstutz.de/
    And his blog: davidstutz.de/category/blog/
    Research work: scholar.google.com/citations?...
    About the Host:
    Jay is a PhD student at Arizona State University.
    Linkedin: / shahjay22
    Twitter: / jaygshah22
    Homepage: www.public.asu.edu/~jgshah1/ (for any queries)
    Stay tuned for upcoming webinars!
    **Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.**
