ICCV 2023 Tutorial: Open-World Learning: From Zero-Shot to Truly Open-World

  • Published 19 Nov 2024
  • ICCV 2023 Tutorial (October 2, 2023):
    Visual Recognition Beyond the Comfort Zone: Adapting to Unseen Concepts on the Fly
    Abstract:
    Visual recognition models often rely on a simple assumption: the training set contains all the information needed to perform the target task. This assumption is violated in most practical applications, since the number of semantic concepts and their compositions is too vast to be captured in a single training set, no matter its scale. To address this problem, we can use two different strategies: the first is to prepare the model for unavailable semantic concepts in advance (e.g., via zero-shot learning or transfer), and the second is to adapt the model on the fly, exploiting the stream of incoming data at deployment (e.g., via continual learning, open-world learning, or test-time training). The aim of this tutorial is to introduce these topics, describing different ways to learn models that can adapt or transfer to unseen (or partially available) semantic knowledge.
    Website (with PDFs of slides):
    sites.google.c...
    Schedule:
    Introduction (Why is it important to develop systems that go beyond the limited knowledge of their training set?)
    Dynamic Adaptation Methods:
    Test-Time Adaptation - Riccardo Volpi (NAVER LABS Europe, France)
    Open-World Learning: Lifelong & Active Learning - Tyler Hayes (NAVER LABS Europe, France)
    Open-World Learning: Open-Vocabulary Learning & Category Discovery - Zsolt Kira (Georgia Tech, USA)
    Static Adaptation Methods:
    Compositional Zero-Shot Learning - Massimiliano Mancini (University of Trento, Italy)
    Visual-Language Learning - Aishwarya Agrawal (University of Montreal, Mila, DeepMind, Canada)
    Putting it All Together: Adaptation at Large (When to use static versus dynamic adaptation methods? What are the pros and cons of each type?)
