Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization

Open Access
Authors
Publication date 2023
Book title 2023 IEEE/CVF International Conference on Computer Vision
Book subtitle ICCV 2023 : Paris, France, 2-6 October 2023 : proceedings
ISBN
  • 9798350307191
ISBN (electronic)
  • 9798350307184
Event 2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Pages (from-to) 13766-13777
Publisher Los Alamitos, California: IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
We propose a self-supervised method for learning motion-focused video representations. Existing approaches minimize distances between temporally augmented videos, which maintain high spatial similarity. We instead propose to learn similarities between videos with identical local motion dynamics but an otherwise different appearance. We do so by adding synthetic motion trajectories, which we refer to as tubelets, to videos. By simulating different tubelet motions and applying transformations, such as scaling and rotation, we introduce motion patterns beyond what is present in the pretraining data. This allows us to learn a video representation that is remarkably data efficient: our approach maintains performance when using only 25% of the pretraining videos. Experiments on 10 diverse downstream settings demonstrate our competitive performance and generalizability to new domains and fine-grained actions. Code is available at https://github.com/fmthoker/tubelet-contrast.
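The core idea of the abstract can be illustrated with a minimal numpy sketch: paste the same patch (tubelet) along the same synthetic trajectory into two different videos, yielding a positive pair with identical local motion but different appearance. Function names, the straight-line trajectory, and all parameters below are illustrative assumptions, not the paper's actual implementation (which also applies tubelet transformations such as scaling and rotation).

```python
import numpy as np

def linear_trajectory(num_frames, start, end):
    """Illustrative straight-line tubelet motion from `start` to `end` (y, x)."""
    ys = np.linspace(start[0], end[0], num_frames).astype(int)
    xs = np.linspace(start[1], end[1], num_frames).astype(int)
    return list(zip(ys, xs))

def overlay_tubelet(video, patch, trajectory):
    """Paste `patch` into each frame of `video` at the trajectory's (y, x),
    simulating a synthetic motion tubelet."""
    out = video.copy()
    ph, pw = patch.shape[:2]
    for t, (y, x) in enumerate(trajectory):
        out[t, y:y + ph, x:x + pw] = patch
    return out

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32
patch = rng.random((4, 4, 3))               # the tubelet's appearance
traj = linear_trajectory(T, (0, 0), (20, 20))

# Two videos with different backgrounds but identical tubelet motion:
# a positive pair for a motion-focused contrastive objective.
video_a = overlay_tubelet(rng.random((T, H, W, 3)), patch, traj)
video_b = overlay_tubelet(rng.random((T, H, W, 3)), patch, traj)
```

A contrastive loss (e.g. InfoNCE) would then pull the representations of `video_a` and `video_b` together, forcing the encoder to rely on the shared motion rather than the differing appearance.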
Document type Conference contribution
Note With supplemental file
Language English
Published at
  • https://doi.org/10.48550/arXiv.2303.11003
  • https://doi.org/10.1109/ICCV51070.2023.01270
  • https://openaccess.thecvf.com/content/ICCV2023/html/Thoker_Tubelet-Contrastive_Self-Supervision_for_Video-Efficient_Generalization_ICCV_2023_paper.html
Other links
  • https://github.com/fmthoker/tubelet-contrast
  • https://www.proceedings.com/72328.html