Interactive Multimodal Learning for Venue Recommendation

Authors
Publication date 2015
Journal IEEE Transactions on Multimedia
Volume 17 | Issue 12
Pages (from-to) 2235-2244
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
In this paper, we propose City Melange, an interactive and multimodal content-based venue explorer. Our framework matches the interacting user to users of social media platforms who exhibit similar taste. The data collection integrates location-based social networks such as Foursquare with general multimedia sharing platforms such as Flickr or Picasa. In City Melange, the user interacts with a set of images and thus implicitly with the underlying semantics. The semantic information is captured through convolutional deep net features in the visual domain and latent topics extracted using latent Dirichlet allocation (LDA) in the text domain. These are further clustered to provide representative user and venue topics. A linear SVM model learns the interacting user's preferences and determines similar users. The experiments show that our content-based approach outperforms the user-activity-based and popular-vote baselines even from the early phases of interaction, while also being able to recommend mainstream venues to mainstream users and off-the-beaten-track venues to aficionados. City Melange is shown to be a well-performing venue exploration approach.
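The abstract's text-domain pipeline (LDA topics over venue text, then a linear SVM over topic features to learn the interacting user's preferences) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the venue texts, topic count, and like/dislike labels are hypothetical toy data, and scikit-learn's `LatentDirichletAllocation` and `LinearSVC` stand in for the paper's components.

```python
# Sketch of the text-domain preference-learning step described in the
# abstract: LDA topic features per venue, linear SVM on implicit
# relevance feedback. Toy data; sklearn classes are stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

# Hypothetical venue descriptions (e.g. aggregated check-in tips).
venue_texts = [
    "espresso latte pastry cozy cafe",
    "craft beer live music late bar",
    "modern art gallery exhibition sculpture",
    "espresso cappuccino quiet cafe wifi",
    "cocktail bar dj dance late",
    "museum painting exhibition classic art",
]

# Text domain: bag-of-words counts -> per-venue topic distributions.
counts = CountVectorizer().fit_transform(venue_texts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topics = lda.fit_transform(counts)      # shape: (n_venues, n_topics)

# Interaction phase: the user implicitly liked venues 0 and 3 (cafes)
# and passed over venues 1 and 4 (bars).
liked, disliked = [0, 3], [1, 4]
X = topics[liked + disliked]
y = [1, 1, 0, 0]

# A linear SVM learns the user's preference direction in topic space.
svm = LinearSVC(C=1.0).fit(X, y)

# Score all venues; higher decision values are closer to the user's taste.
scores = svm.decision_function(topics)
ranking = scores.argsort()[::-1]
print("venues ranked by predicted preference:", ranking.tolist())
```

In the full system these topic features would be combined with the convolutional visual features and clustered into representative user and venue topics before matching similar users; the sketch covers only the SVM preference-learning step.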
Document type Article
Language English
Published at https://doi.org/10.1109/TMM.2015.2480007