Automatic generation of dense non-rigid optical flow

Open Access
Authors
Publication date 11-2021
Journal Computer Vision and Image Understanding
Article number 103274
Volume 212
Number of pages 8
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Hardly any large-scale datasets with dense optical flow of non-rigid motion from real-world imagery exist today. The reason lies mainly in the setup required to derive ground-truth optical flow: a series of images with known camera poses along the trajectory, and an accurate 3D model of a textured scene. Human annotation is not only too tedious for large databases, it also hardly yields accurate optical flow. To circumvent the need for manual annotation, we propose a framework that automatically generates optical flow from real-world videos. The method extracts and matches objects across video frames to compute initial constraints, and applies a deformation over the objects of interest to obtain dense optical flow fields. We propose several ways to augment the optical flow variations. Extensive experiments show that training on our automatically generated optical flow outperforms training on rigid synthetic data for FlowNet-S, LiteFlowNet, PWC-Net, and RAFT. Datasets and the implementation of our optical flow generation framework are released at https://github.com/lhoangan/arap_flow.
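The core idea behind such self-supervised flow generation can be illustrated in miniature: if an image is warped by a known dense deformation field, that field is by construction the exact ground-truth optical flow for the resulting image pair. The sketch below, assuming plain NumPy and a simple smoothed-noise deformation (a stand-in for the paper's deformation model; `smooth_flow_field`, `warp_image`, and the `magnitude` parameter are hypothetical names, not the released API), shows the principle:

```python
import numpy as np

def smooth_flow_field(h, w, magnitude=3.0, seed=0):
    """Generate a smooth dense flow field (2, h, w) by bilinearly
    upsampling a coarse grid of random displacements. This is only an
    illustrative stand-in for a learned or ARAP-style deformation."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1.0, 1.0, size=(2, 4, 4))  # coarse control grid
    ys, xs = np.linspace(0, 3, h), np.linspace(0, 3, w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, 3), np.minimum(x0 + 1, 3)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    flow = np.empty((2, h, w))
    for c in range(2):
        f = coarse[c]
        top = f[y0][:, x0] * (1 - wx) + f[y0][:, x1] * wx
        bot = f[y1][:, x0] * (1 - wx) + f[y1][:, x1] * wx
        flow[c] = (top * (1 - wy) + bot * wy) * magnitude
    return flow

def warp_image(img, flow):
    """Backward-warp a grayscale image (h, w) by flow (2, h, w) with
    nearest-neighbor sampling. The (img, warped, flow) triple then forms
    a training sample with exact ground-truth flow."""
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + flow[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]
```

A zero flow field warps an image onto itself, which is a handy sanity check when wiring such a generator into a training pipeline.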
Document type Article
Note With supplementary data
Language English
Published at https://doi.org/10.1016/j.cviu.2021.103274
Other links https://github.com/lhoangan/arap_flow