In this paper we describe our TRECVID 2009 video retrieval experiments. The MediaMill team participated in three tasks: concept
detection, automatic search, and interactive search. The starting point for the MediaMill concept detection approach is our top-performing
bag-of-words system of last year, which uses multiple color descriptors, codebooks with soft-assignment, and kernel-based
supervised learning. We improve upon this baseline system by exploring two novel research directions. First, we study a
multi-modal extension that incorporates 20 audio concepts, fused using two novel multi-kernel supervised learning methods.
Second, with the help of recently proposed algorithmic refinements of bag-of-words, a GPU implementation of bag-of-words, and
compute clusters, we scale up the amount of visual information analyzed by an order of magnitude, to a total of 1,000,000
i-frames. Our experiments evaluate the merit of these new components, ultimately leading to 64 robust concept detectors for
video retrieval. For retrieval, a robust but limited set of concept detectors makes it necessary to rely on as many auxiliary
information channels as possible. For automatic search, we therefore explore how to learn to rank various information channels
simultaneously to maximize video search results for a given topic. To improve the video retrieval results further, our interactive
search experiments investigate the roles of visualizing preview results for a given browse dimension and of relevance feedback
mechanisms that learn to solve complex search topics by analyzing user browsing behavior. The 2009 edition of the TRECVID
benchmark has again proven fruitful for the MediaMill team, resulting in the top ranking for both concept detection
and interactive search.