World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models
Abstract
LVLMs struggle to preserve individual cultural identities in mixed visual scenes, but supervised fine-tuning on a diverse culture mixing dataset improves their consistency and reduces background sensitivity.
In a globalized world, cultural elements from diverse origins frequently appear together within a single visual scene. We refer to these as culture mixing scenarios, yet how Large Vision-Language Models (LVLMs) perceive them remains underexplored. We investigate culture mixing as a critical challenge for LVLMs and examine how current models behave when cultural items from multiple regions appear together. To systematically analyze these behaviors, we construct CultureMix, a food Visual Question Answering (VQA) benchmark with 23k diffusion-generated, human-verified culture mixing images across four subtasks: (1) food-only, (2) food+food, (3) food+background, and (4) food+food+background. Evaluating 10 LVLMs, we find consistent failures to preserve individual cultural identities in mixed settings. Models show strong background reliance, with accuracy dropping 14% when cultural backgrounds are added to food-only baselines, and they produce inconsistent predictions for identical foods across different contexts. To address these limitations, we explore three robustness strategies and find that supervised fine-tuning on a diverse culture mixing dataset substantially improves model consistency and reduces background sensitivity. We call for increased attention to culture mixing scenarios as a critical step toward developing LVLMs capable of operating reliably in culturally diverse real-world environments.
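The abstract's two headline diagnostics, background sensitivity and cross-context consistency, can be made concrete with a short evaluation sketch. The snippet below is illustrative only and is not the authors' code: the `ask(image_path, question)` callable, the record fields (`image_path`, `question`, `answer`, `food_id`), and the exact-match scoring are all assumptions made for the sake of the example.

```python
# Illustrative evaluation harness for the two failure modes described in the
# abstract. All names here are assumptions for the sake of the example; the
# paper's actual evaluation code and data schema may differ.

from collections import defaultdict
from typing import Callable, Iterable, Mapping

Ask = Callable[[str, str], str]  # ask(image_path, question) -> predicted answer


def accuracy(records: Iterable[Mapping[str, str]], ask: Ask) -> float:
    """Exact-match VQA accuracy over {image_path, question, answer} records."""
    records = list(records)
    if not records:
        return 0.0
    correct = sum(
        ask(r["image_path"], r["question"]).strip().lower() == r["answer"].strip().lower()
        for r in records
    )
    return correct / len(records)


def background_sensitivity(food_only, food_plus_background, ask: Ask) -> float:
    """Accuracy drop when a cultural background is added to otherwise
    identical food-only images (positive = model relies on the background)."""
    return accuracy(food_only, ask) - accuracy(food_plus_background, ask)


def consistency(records, ask: Ask) -> float:
    """Fraction of food items that receive one identical prediction across
    every context (food-only, food+food, food+background, food+food+background)."""
    predictions = defaultdict(set)
    for r in records:  # records sharing a `food_id` show the same food in different contexts
        predictions[r["food_id"]].add(ask(r["image_path"], r["question"]).strip().lower())
    return sum(len(p) == 1 for p in predictions.values()) / max(len(predictions), 1)
```

Under these assumptions, the 14% figure in the abstract would correspond to `background_sensitivity` reported as a percentage, and the inconsistent-prediction finding to a low `consistency` score for identical foods across the four subtasks.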
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Vision Language Models are Confused Tourists (2025)
- BLEnD-Vis: Benchmarking Multimodal Cultural Understanding in Vision Language Models (2025)
- Exposing Blindspots: Cultural Bias Evaluation in Generative Image Models (2025)
- Where Culture Fades: Revealing the Cultural Gap in Text-to-Image Generation (2025)
- Culture in Action: Evaluating Text-to-Image Models through Social Activities (2025)
- MMA-ASIA: A Multilingual and Multimodal Alignment Framework for Culturally-Grounded Evaluation (2025)
- EverydayMMQA: A Multilingual and Multimodal Framework for Culturally Grounded Spoken Visual QA (2025)
