DoRA Walking Tours Dataset (ICLR 2024)

Contributors
  • Shashanka Venkataramanan
  • Mamshad Nayeem Rizve
  • Joao Carreira
  • Yannis Avrithis
Publication date 13-02-2024
Description
Self-supervised learning has unlocked the potential of scaling up pretraining to billions of images, since annotation is unnecessary. But are we making the best use of data? How much more economical can we be? In this work, we attempt to answer this question by making two contributions. First, we investigate first-person videos and introduce a "Walking Tours" dataset. These videos are high-resolution, hours-long, captured in a single uninterrupted take, and depict a large number of objects and actions with natural scene transitions. They are unlabeled and uncurated, thus realistic for self-supervision and comparable with human learning. Second, we introduce a novel self-supervised image pretraining method tailored for learning from continuous videos.

Reference: Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video. Shashanka Venkataramanan, Mamshad Nayeem Rizve, João Carreira, Yuki M. Asano, Yannis Avrithis. In: International Conference on Learning Representations (ICLR), 2024.
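To make "learning from one continuous video" concrete, the sketch below shows one simple way to draw short training clips from a single hours-long recording by uniformly sampling clip start times. This is an illustrative example only, not the pretraining method from the paper; the function name and parameters are hypothetical.

```python
import random


def sample_clip_starts(video_seconds, clip_seconds, num_clips, seed=0):
    """Uniformly sample start times (in seconds) for short clips
    within one long, uninterrupted video.

    Hypothetical helper for illustration; not the paper's method.
    """
    rng = random.Random(seed)
    latest_start = video_seconds - clip_seconds  # keep clips inside the video
    return sorted(rng.uniform(0, latest_start) for _ in range(num_clips))


# Example: a 2-hour walking-tour video, 64 clips of 10 s each.
starts = sample_clip_starts(video_seconds=2 * 3600, clip_seconds=10, num_clips=64)
print(len(starts))  # 64 clip start times, sorted in temporal order
```

In practice the sampled timestamps would be used to decode frames from the video file; the point here is only that a single long, uncurated take already yields many diverse training crops.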
Publisher Universiteit van Amsterdam
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Document type Dataset
DOI https://doi.org/10.21942/uva.25189275.v1