Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition
| Authors | |
|---|---|
| Publication date | 2024 |
| Host editors | |
| Book title | The 2024 Conference on Empirical Methods in Natural Language Processing: Findings of EMNLP 2024 |
| Book subtitle | EMNLP 2024 : November 12-16, 2024 |
| ISBN (electronic) | |
| Event | 2024 Conference on Empirical Methods in Natural Language Processing |
| Pages (from-to) | 11597–11611 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Organisations | |
| Abstract | Visual storytelling consists of generating a natural language story given a temporally ordered sequence of images. This task is not only challenging for models, but also very difficult to evaluate with automatic metrics, since there is no consensus about what makes a story ‘good’. In this paper, we introduce a novel method that measures story quality in terms of human likeness with respect to three key aspects highlighted in previous work: visual grounding, coherence, and repetitiveness. We then use this method to evaluate the stories generated by several models, showing that the foundation model LLaVA obtains the best result, but only slightly so compared to TAPM, a visual storytelling model 50 times smaller. Upgrading the visual and language components of TAPM results in a model that yields competitive performance with a relatively low number of parameters. Finally, we carry out a human evaluation study, whose results suggest that a ‘good’ story may require more than a human-like level of visual grounding, coherence, and repetition. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.18653/v1/2024.findings-emnlp.679 |
| Downloads | 2024.findings-emnlp.679v2 (Final published version) |
