Lecture 10 - Knowledge Distillation | MIT 6.S965

  • Published 13 Oct 2022
  • Lecture 10 introduces knowledge distillation, including self-distillation, online distillation, and distillation for different tasks. This lecture also introduces network augmentation, a training technique for tiny machine-learning models. (A sketch of the standard distillation loss follows the course description below.)
    Keywords: Knowledge Distillation, Online Distillation, Self Distillation, Network Augmentation
    Slides: efficientml.ai/schedule/
    --------------------------------------------------------------------------------------
    TinyML and Efficient Deep Learning Computing
    Instructor:
    Song Han: songhan.mit.edu
    Have you found it difficult to deploy neural networks on mobile and IoT devices? Have you ever found it too slow to train neural networks? This course is a deep dive into efficient machine learning techniques that enable powerful deep learning applications on resource-constrained devices. Topics cover efficient inference techniques, including model compression, pruning, quantization, neural architecture search, and distillation; and efficient training techniques, including gradient compression and on-device transfer learning; followed by application-specific model optimization techniques for videos, point clouds, and NLP; and efficient quantum machine learning. Students will get hands-on experience implementing deep learning applications on microcontrollers, mobile phones, and quantum machines with an open-ended design project related to mobile AI.
    Website:
    efficientml.ai/
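
For readers who want a concrete starting point, below is a minimal sketch of the classic temperature-scaled knowledge-distillation loss (Hinton et al., 2015) that this lecture builds on. It assumes PyTorch; the function name `distillation_loss` and the hyperparameter defaults (`T`, `alpha`) are illustrative, not taken from the lecture materials.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of (1) KL divergence between temperature-softened
    teacher and student distributions and (2) cross-entropy with the
    ground-truth labels. T and alpha are illustrative defaults."""
    # Soft targets: KL(student || teacher) at temperature T.
    # The T*T factor keeps the soft-target gradients on the same
    # scale as the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy with the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Hypothetical usage: distill a frozen teacher into a smaller student.
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```

Self-distillation and online distillation, also covered in the lecture, reuse this same loss but change where the teacher signal comes from (the student's own earlier predictions, or peer models trained jointly).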
