Multimodal explanations: Justifying decisions and pointing to the evidence

Authors
  • D.H. Park
  • L.A. Hendricks
  • Z. Akata
  • A. Rohrbach
  • B. Schiele
  • T. Darrell
  • M. Rohrbach
Publication date 2018
Book title 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Book subtitle Proceedings: 18-22 June 2018, Salt Lake City, Utah
ISBN
  • 9781538664216
ISBN (electronic)
  • 9781538664209
Event 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages (from-to) 8779-8788
Publisher Los Alamitos, California: IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Deep models that are both effective and explainable are desirable in many settings; prior explainable models have been unimodal, offering either image-based visualization of attention weights or text-based generation of post-hoc justifications. We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths. We collect two new datasets to define and evaluate this task, and propose a novel model which can provide joint textual rationale generation and attention visualization. Our datasets define visual and textual justifications of a classification decision for activity recognition tasks (ACT-X) and for visual question answering tasks (VQA-X). We quantitatively show that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision. We also qualitatively show cases where visual explanation is more insightful than textual explanation, and vice versa, supporting our thesis that multimodal explanation models offer significant benefits over unimodal approaches.
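The sketch below is a minimal conceptual illustration, not the authors' actual model: it only shows the general pattern the abstract describes, in which a classifier attends over spatial image features (the visual explanation) and a text decoder conditioned on the decision and the attended evidence generates a textual justification. It assumes PyTorch, and all module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Conceptual sketch (illustrative only): joint attention-based decision
# and textual-rationale generation, roughly in the spirit of the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalExplainer(nn.Module):
    def __init__(self, feat_dim=512, num_answers=1000, vocab_size=5000, hid=512):
        super().__init__()
        self.att = nn.Linear(feat_dim, 1)             # attention over spatial regions
        self.cls = nn.Linear(feat_dim, num_answers)   # task decision (answer/activity)
        self.embed = nn.Embedding(vocab_size, hid)
        self.init_h = nn.Linear(feat_dim + num_answers, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, feats, rationale_tokens):
        # feats: (B, R, feat_dim) spatial image features; rationale_tokens: (B, T)
        alpha = F.softmax(self.att(feats).squeeze(-1), dim=1)   # (B, R): visual explanation
        pooled = (alpha.unsqueeze(-1) * feats).sum(dim=1)       # attended visual evidence
        answer_logits = self.cls(pooled)                        # classification decision
        # Condition the rationale decoder on both the decision and the evidence.
        h0 = torch.tanh(self.init_h(torch.cat([pooled, answer_logits], dim=-1)))
        dec_out, _ = self.decoder(self.embed(rationale_tokens), h0.unsqueeze(0))
        word_logits = self.out(dec_out)                         # textual justification
        return answer_logits, word_logits, alpha
```

In such a setup, training with ground-truth textual explanations (as in VQA-X and ACT-X) would supervise word_logits alongside the answer loss, which is the mechanism the abstract credits with also improving the localization quality of the attention weights alpha.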
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/CVPR.2018.00915