Spatial-Temporal Omni-Scale Feature Learning for Person Re-Identification

Authors
Publication date 2020
Book title IWBF 2020
Book subtitle 2020 8th International Workshop on Biometrics and Forensics (IWBF) : proceedings : Porto, Portugal, April 29-30, 2020
ISBN
  • 9781728162331
ISBN (electronic)
  • 9781728162324
Event 8th International Workshop on Biometrics and Forensics
Pages (from-to) 121-125
Number of pages 5
Publisher Piscataway, NJ: IEEE
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
State-of-the-art person re-identification (ReID) models use Convolutional Neural Networks (CNNs) for feature extraction and comparison. These models often fail to capture all the intra- and inter-class variations that arise in person ReID, making it harder to discriminate between data subjects. In this paper we seek to reduce these problems and improve performance by combining two state-of-the-art models. We use the Omni-Scale Network (OSNet) as our CNN and evaluate on the Market1501 and DukeMTMC-reID datasets for person ReID. To fully utilize the potential of these datasets, we apply a spatial-temporal constraint, which extracts the camera ID and timestamp from each image to form a distribution. We combine these two methods into a hybrid model, the Spatial-Temporal Omni-Scale Network (st-OSNet). Our model attains a Rank-1 (R1) accuracy of 98.2% and a mean average precision (mAP) of 92.7% on the Market1501 dataset. On the DukeMTMC-reID dataset our model achieves 94.3% R1 and 86.1% mAP, thereby surpassing OSNet by a large margin on both datasets (OSNet: 94.3% R1 / 86.4% mAP on Market1501 and 88.4% R1 / 76.1% mAP on DukeMTMC-reID, respectively).
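The spatial-temporal constraint described in the abstract can be illustrated with a short sketch: a transition-time histogram per camera pair is estimated from training data, and the resulting probability is fused with the visual similarity score. The function names, the bin layout, and the logistic smoothing used for fusion below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def build_st_histogram(cam_pairs, time_diffs, n_cams, n_bins, bin_width):
    """Estimate P(time gap | camera pair) from training observations.

    cam_pairs:  list of (cam_a, cam_b) camera-ID pairs (assumed labels)
    time_diffs: matching list of timestamp differences between the pair
    Returns an array hist[c1, c2, k], normalized per camera pair.
    """
    hist = np.zeros((n_cams, n_cams, n_bins))
    for (c1, c2), dt in zip(cam_pairs, time_diffs):
        k = min(int(abs(dt) // bin_width), n_bins - 1)  # clamp to last bin
        hist[c1, c2, k] += 1
    totals = hist.sum(axis=2, keepdims=True)
    return hist / np.maximum(totals, 1)  # avoid division by zero

def joint_score(visual_sim, st_prob, a=5.0, b=5.0):
    """Fuse appearance similarity with the spatial-temporal probability.

    Uses a smoothed logistic form (a common choice in the spatial-temporal
    ReID literature; the parameters a and b here are assumed values).
    """
    f = lambda p: 1.0 / (1.0 + a * np.exp(-b * p))
    return f(visual_sim) * f(st_prob)

# Usage: two observed transitions from camera 0 to camera 1.
hist = build_st_histogram([(0, 1), (0, 1)], [10, 30],
                          n_cams=2, n_bins=4, bin_width=20)
score = joint_score(0.9, hist[0, 1, 0])
```

A gallery image whose camera pair and time gap are implausible under the histogram receives a low spatial-temporal probability, which suppresses its joint score even when its appearance similarity is high.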
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/IWBF49977.2020.9107966