Evaluating Multimedia Features and Fusion for Example-Based Event Detection

Open Access
Authors
  • G.K. Myers
  • R. Nallapati
  • J. van Hout
  • S. Pancoast
Publication date 2014
Journal Machine Vision and Applications
Volume 25, Issue 1
Pages 17-32
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Multimedia event detection (MED) is a challenging problem because of the heterogeneous content and variable quality found in large collections of Internet videos. To study the value of multimedia features and fusion for representing and learning events from a set of example video clips, we created SESAME, a system for video SEarch with Speed and Accuracy for Multimedia Events. SESAME includes multiple bag-of-words event classifiers based on single data types: low-level visual, motion, and audio features; high-level semantic visual concepts; and automatic speech recognition. Event detection performance was evaluated for each event classifier. The performance of low-level visual and motion features was improved by the use of difference coding. The accuracy of the visual concepts was nearly as strong as that of the low-level visual features. Experiments with a number of fusion methods for combining the event detection scores from these classifiers revealed that simple fusion methods, such as arithmetic mean, perform as well as or better than other, more complex fusion methods. SESAME’s performance in the 2012 TRECVID MED evaluation was one of the best reported.
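The abstract reports that simple score-level fusion, such as the arithmetic mean of the individual classifiers' detection scores, performed as well as or better than more complex fusion methods. A minimal sketch of that fusion step is below; the function name, toy scores, and the assumption that scores are pre-normalized to a common range are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_scores(score_matrix):
    """Arithmetic-mean fusion of event-detection scores.

    score_matrix: (n_classifiers, n_videos) array of per-classifier
    detection scores, assumed already normalized to a common range
    (e.g. [0, 1]) so that averaging across classifiers is meaningful.
    Returns one fused score per video.
    """
    scores = np.asarray(score_matrix, dtype=float)
    return scores.mean(axis=0)

# Toy example: three single-data-type classifiers (e.g. low-level
# visual, motion, audio) each scoring four candidate videos.
scores = [
    [0.9, 0.2, 0.6, 0.1],  # low-level visual features
    [0.7, 0.4, 0.5, 0.2],  # motion features
    [0.8, 0.3, 0.4, 0.3],  # audio features
]
fused = fuse_scores(scores)  # one averaged score per video
```

Averaging is attractive here because it needs no fusion training data and is robust when individual classifiers have comparable, calibrated score ranges.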
Document type Article
Language English
DOI https://doi.org/10.1007/s00138-013-0527-8