MediaMill at TRECVID 2013: Searching Concepts, Objects, Instances and Events in Video
| Authors | |
|---|---|
| Publication date | 11-2013 |
| Event | TRECVID 2013 Workshop |
| Number of pages | 6 |
| Organisations | |
| Abstract | In this paper we summarize our TRECVID 2013 [15] video retrieval experiments. The MediaMill team participated in four tasks: concept detection, object localization, instance search, and event recognition. For all tasks the starting point is our top-performing bag-of-words system of TRECVID 2008-2012, which uses color SIFT descriptors, average and difference coded into codebooks with spatial pyramids and kernel-based machine learning. New this year are concept detection with deep learning, concept detection without annotations, object localization using selective search, instance search by reranking, and event recognition based on concept vocabularies. Our experiments focus on establishing the video retrieval value of the innovations. The 2013 edition of the TRECVID benchmark has again been a fruitful participation for the MediaMill team, resulting in the best result for concept detection, concept detection without annotation, object localization, concept pair detection, and visual event recognition with few examples. |
| Document type | Paper |
| Language | English |
| Published at | https://www-nlpir.nist.gov/projects/tvpubs/tv13.papers/mediamill.pdf |
| Other links | http://www.science.uva.nl/research/publications/2013/SnoekPTRECVID2013 |
| Downloads | mediamill (Final published version) |
