Beyond Words: Exploring Cultural Value Sensitivity in Multimodal Models
| Authors | |
|---|---|
| Publication date | 2025 |
| Host editors | |
| Book title | Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics : Proceedings of the Conference : Findings |
| Book subtitle | NAACL 2025 : April 29-May 4, 2025 |
| ISBN (electronic) | |
| Event | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, NAACL 2025 |
| Pages (from-to) | 7607–7623 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Organisations | |
| Abstract | Investigating value alignment in Large Language Models (LLMs) based on cultural context has become a critical area of research. However, similar biases have not been extensively explored in large vision-language models (VLMs). As the scale of multimodal models continues to grow, it becomes increasingly important to assess whether images can serve as reliable proxies for culture and how these values are embedded through the integration of both visual and textual data. In this paper, we conduct a thorough evaluation of multimodal models at different scales, focusing on their alignment with cultural values. Our findings reveal that, much like LLMs, VLMs exhibit sensitivity to cultural values, but their performance in aligning with these values is highly context-dependent. While VLMs show potential in improving value understanding through the use of images, this alignment varies significantly across contexts, highlighting the complexities and underexplored challenges in the alignment of multimodal models. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.18653/v1/2025.findings-naacl.422 |
| Downloads | 2025.findings-naacl.422 (Final published version) |
| Permalink to this page | |