Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation (ECCV 2024)

  • Published 18 Sep 2024
  • Project page: github.com/tub...
    PDF: arxiv.org/pdf/...
    Dataset (20GB): drive.google.c...
    Current optical flow and point-tracking methods rely heavily on synthetic datasets. Event cameras are novel vision sensors with advantages in challenging visual conditions, but state-of-the-art frame-based methods cannot be easily adapted to event data due to the limitations of current event simulators. We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories, and propose an efficient solution to the high-dimensional assignment problem between non-linear trajectories and events. We demonstrate the effectiveness of both contributions in two scenarios: in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model on the real-world dataset EVIMO2 by 29%; in optical flow estimation, our method elevates a simple UNet to state-of-the-art performance among self-supervised methods on the DSEC optical flow benchmark.
    Reference:
    F. Hamann, Z. Wang, I. Asmanis, K. Chaney, G. Gallego, K. Daniilidis,
    Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation,
    European Conference on Computer Vision (ECCV), 2024.
    Affiliations:
    Technical University of Berlin (Berlin, Germany),
    University of Pennsylvania (PA, USA),
    Robotics Institute Germany (RIG), www.robotics-i...
    Science of Intelligence Excellence Cluster (Berlin, Germany), www.scienceofi...
    Einstein Center Digital Future (Berlin, Germany), www.digital-fu...
    Event-based Vision:
    - Research: sites.google.c...
    - Survey paper: arxiv.org/abs/...
    - Course at TU Berlin: sites.google.c...
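The Contrast Maximization idea the abstract builds on can be sketched in a few lines: warp events along a candidate motion, accumulate them into an image of warped events (IWE), and score the motion by the IWE's contrast (variance), which peaks when events align along the true trajectories. The sketch below uses a simple constant-velocity warp on synthetic events; the paper itself uses non-linear pixel-level trajectories, and all names and parameters here are illustrative.

```python
import numpy as np

def iwe_variance(events, velocity, resolution=(32, 32)):
    """Warp events to a reference time with a constant-velocity model
    and return the variance (contrast) of the image of warped events.
    events: array of (x, y, t) rows; velocity: (vx, vy) in px/s."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Warp each event back to t = 0 along the candidate flow.
    xw = np.round(x - velocity[0] * t).astype(int)
    yw = np.round(y - velocity[1] * t).astype(int)
    # Keep only events that land inside the sensor plane.
    keep = (xw >= 0) & (xw < resolution[0]) & (yw >= 0) & (yw < resolution[1])
    iwe = np.zeros(resolution)
    np.add.at(iwe, (yw[keep], xw[keep]), 1.0)  # accumulate event counts
    return iwe.var()

# Synthetic events from a vertical edge moving at 5 px/s in x.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)
x = 5.0 * t + rng.integers(10, 14, 500)          # edge a few pixels wide
y = rng.integers(0, 32, 500).astype(float)
events = np.stack([x, y, t], axis=1)

# The true velocity gives a sharper IWE (higher contrast) than a wrong one.
print(iwe_variance(events, (5.0, 0.0)) > iwe_variance(events, (0.0, 0.0)))
```

Maximizing this contrast over the motion parameters (here a single velocity; in the paper, per-pixel trajectory coefficients) yields a self-supervised loss that needs no ground-truth flow.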
  • Science & Technology
