---
language:
- vi
- en
license: mit
task_categories:
- image-text-to-text
tags:
- multimodal
- vietnamese
- exam
- education
- vlm
- benchmark
- question-answering
- low-resource
configs:
- config_name: default
  data_files:
  - split: full_vqa
    path: data/full_vqa-*
  - split: random_subset_vqa
    path: data/random_subset_vqa-*
  - split: random_subset_ocr
    path: data/random_subset_ocr-*
  - split: cropped_random_subset_vqa
    path: data/cropped_random_subset_vqa-*
  - split: cropped_random_subset_vqa_description
    path: data/cropped_random_subset_vqa_description-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: ID
    dtype: string
  - name: image_path
    dtype: string
  - name: en_prompt
    dtype: string
  - name: vn_prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: subject
    dtype: string
  - name: multiple_question
    dtype: bool
  splits:
  - name: full_vqa
    num_bytes: 744184639.456
    num_examples: 2548
  - name: random_subset_vqa
    num_bytes: 128246826
    num_examples: 210
  - name: random_subset_ocr
    num_bytes: 128350896
    num_examples: 210
  - name: cropped_random_subset_vqa
    num_bytes: 54505685
    num_examples: 210
  - name: cropped_random_subset_vqa_description
    num_bytes: 54604100
    num_examples: 210
  download_size: 1623582536
  dataset_size: 1109892146.4559999
---

# ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?

by Vy Tuong Dang*, An Vo*, Quang Tau, Duc Dm, Daeyoung Kim

*Equal contribution
KAIST

[![🌐 Project Page](https://img.shields.io/badge/🌐-Project_Page-blue)](https://vi-exam.github.io) [![arXiv](https://img.shields.io/badge/arXiv-2508.13680-b31b1b.svg)](https://arxiv.org/abs/2508.13680) [![🤗 Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Dataset-yellow.svg)](https://huggingface.co/datasets/anvo25/viexam) [![💻 Code](https://img.shields.io/badge/💻-Code-black)](https://github.com/TuongVy20522176/ViExam) [![Code License](https://img.shields.io/badge/Code_License-MIT-green.svg)](LICENSE)
---

**TLDR:** State-of-the-art Vision Language Models (VLMs) demonstrate remarkable capabilities on English multimodal tasks but significantly underperform on Vietnamese educational assessments. ViExam reveals that SOTA VLMs achieve only 57.74% accuracy and open-source models only 27.70%, both falling short of average human performance (66.54%) on Vietnamese multimodal exam questions.

## Abstract

Vision language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. Our work presents the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams by proposing ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% mean accuracy, and open-source models 27.70%, across 7 academic domains: Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding average human performance, yet still falling substantially short of the best human performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at: https://github.com/TuongVy20522176/ViExam.

## Dataset Overview

The ViExam dataset comprises 2,548 multimodal Vietnamese exam questions across **7 diverse domains**: Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. For each domain, we provide genuinely multimodal questions in which visual elements (charts, diagrams, illustrations, tables) are integrated with Vietnamese text. The dataset includes tasks requiring complex multimodal reasoning, cultural knowledge, and domain-specific terminology comprehension in Vietnamese educational contexts.

### Key Features

- **Language**: Vietnamese (a low-resource language with 100+ million speakers)
- **Modality**: Truly multimodal (visual + textual integration)
- **Domains**: 7 comprehensive academic and practical domains
- **Question Types**: Multiple-choice (88%), multiple-answer (1%), variable options (11%)
- **Difficulty**: Real Vietnamese exam standards requiring complex reasoning

### Domain Distribution

| Domain | Questions | Description |
|--------|-----------|-------------|
| Mathematics | 456 | Mathematical equations, geometric diagrams, statistical charts |
| Geography | 481 | Maps, geographical illustrations, statistical data |
| Biology | 341 | Scientific diagrams, anatomical illustrations, experimental setups |
| Physics | 361 | Physical phenomena diagrams, experimental apparatus, graphs |
| Chemistry | 302 | Chemical structures, reaction diagrams, laboratory setups |
| Driving Test | 367 | Traffic scenarios, road signs, situational judgment |
| IQ Test | 240 | Pattern recognition, logical sequences, spatial reasoning |

---

## 👋 Trying out our questions on your model

Try your model on these challenging Vietnamese multimodal exam questions, which most tested models fail to answer correctly. Each question requires understanding both Vietnamese text and visual elements such as diagrams, charts, and illustrations.

---

## 🧪 Dataset Overview

| Subject | #Questions |
|---|---|
| Mathematics | 456 |
| Physics | 361 |
| Chemistry | 302 |
| Biology | 341 |
| Geography | 481 |
| Driving Test | 367 |
| IQ Test | 240 |
| **Total** | **2,548** |

> Each question is an image containing both Vietnamese text and visuals. Most are 4-option multiple-choice questions. No screenshots of text-only questions are included; all questions are genuinely **multimodal**.

## 🚀 Quick Start Guide

### Option 1: Use Pre-built Dataset (Recommended for evaluating your models)

**If you just want to evaluate VLMs on our Vietnamese exam questions:**

🔥 **Download the complete dataset from Hugging Face** with full images and annotations:
- Go to our [Hugging Face dataset](https://huggingface.co/datasets/anvo25/viexam)
- Download ready-to-use Vietnamese multimodal exam questions

This is the fastest way to get started with evaluation.
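You can also load the splits programmatically with the 🤗 `datasets` library. The following is a minimal sketch, not an official loader: the repository id and field names are taken from the dataset card above, and you can swap `full_vqa` for any of the other listed splits.

```python
from datasets import load_dataset

# Load the full benchmark split (2,548 questions). Other splits listed in the
# card: random_subset_vqa, random_subset_ocr, cropped_random_subset_vqa,
# cropped_random_subset_vqa_description.
ds = load_dataset("anvo25/viexam", split="full_vqa")

sample = ds[0]
print(sample["ID"], sample["subject"], sample["ground_truth"])
print(sample["vn_prompt"])   # Vietnamese prompt; "en_prompt" holds the English one
sample["image"].show()       # the exam question is stored as an image
```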
### Option 2: Reproduce/Generate Dataset

**If you want to reproduce our data pipeline or create custom variations:**

Please follow the installation and generation steps below to run the complete pipeline locally.

---

## 💻 Getting Started

```bash
git clone https://github.com/TuongVy20522176/ViExam.git
cd viexam
pip install -r requirements.txt
```

## 📊 Tasks

ViExam spans **7 distinct domains** representative of Vietnamese educational assessments:

### Academic Subjects (Tasks 1-5)
- **Mathematics**: Function analysis, calculus, geometry (456 questions)
- **Physics**: Mechanics, waves, thermodynamics (361 questions)
- **Chemistry**: Organic chemistry, electrochemistry (302 questions)
- **Biology**: Genetics, molecular biology (341 questions)
- **Geography**: Data visualization, economic geography (481 questions)

### Practical Assessments (Tasks 6-7)
- **[Driving Test](dataset/question_image/driving/)**: Traffic rules, road signs, safety scenarios (367 questions)
- **[IQ Test](dataset/question_image/iq/)**: Pattern recognition, logical reasoning (240 questions)

*All questions integrate Vietnamese text with visual elements (diagrams, charts, illustrations) at multiple resolutions.*

---

## 🚀 Quickstart

### 1. Install requirements

```bash
pip install -r requirements.txt
```

### 2. Run evaluation on VLMs

```bash
# Prepare evaluation batches
python batch_api_code/main_batch_prepare.py \
    --model claude-sonnet-4-20250514 \
    --input-file dataset/metadata/full_vqa.json \
    --prompt_language vn

# Execute batch evaluation
python batch_api_code/main_batch_api.py
```

Or for individual models:

```bash
# Evaluate a single model
python api_code/main_api.py \
    --model o3-2025-04-16 \
    --prompt_language vn \
    --input-file dataset/metadata/cropped_random_subset_vqa_description.json

# Cross-lingual evaluation
python api_code/main_api.py \
    --model gpt-4.1-2025-04-14 \
    --prompt_language en \
    --input-file dataset/metadata/full_vqa.json
```

### 3. Analyze results

```bash
python src/result.py
```
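Conceptually, the analysis step reduces to matching each model's extracted answer letter against the `ground_truth` field and aggregating per subject. The sketch below is illustrative only and is not the actual logic of `src/result.py`: it assumes `dataset/metadata/full_vqa.json` is a list of records carrying the same `ID`, `subject`, and `ground_truth` fields as the Hugging Face splits, and that model outputs have been collected into a hypothetical `predictions.json`.

```python
import json
import re
from collections import defaultdict

# Assumed inputs (schemas are hypothetical, see the note above).
with open("dataset/metadata/full_vqa.json", encoding="utf-8") as f:
    questions = {q["ID"]: q for q in json.load(f)}
with open("predictions.json", encoding="utf-8") as f:
    predictions = json.load(f)  # list of {"ID": ..., "prediction": "..."}

def extract_choice(text):
    """Pull the first standalone answer letter (A-D) out of a model response."""
    match = re.search(r"\b([A-D])\b", text.upper())
    return match.group(1) if match else None

correct, total = defaultdict(int), defaultdict(int)
for pred in predictions:
    question = questions.get(pred["ID"])
    if question is None:
        continue
    subject = question["subject"]
    total[subject] += 1
    if extract_choice(pred["prediction"]) == question["ground_truth"].strip().upper():
        correct[subject] += 1

for subject in sorted(total):
    print(f"{subject}: {correct[subject] / total[subject]:.2%} ({total[subject]} questions)")
```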
---

## ✏️ Human-in-the-loop Enhancement

We provide web-based tools for:

* **Question Selection**: `src/choose_question.html` - Filter and select questions
* **OCR Verification**: `src/ocr_ground_truth.html` - Edit OCR results and descriptions
* **Quality Control**: `src/check_question.html` - Manual verification interface

---

## 🗂️ Repository Structure

```
viexam/
├── api_code/                      # Individual VLM evaluation
│   ├── api_handlers/              # API wrappers for VLMs
│   ├── main_api.py                # Main API call logic
│   └── main_api_qwen.py           # Qwen-specific evaluation
│
├── batch_api_code/                # Batch processing for large-scale evaluation
│   ├── main_batch_prepare.py      # Prepare evaluation batches
│   ├── main_batch_api.py          # Execute batch evaluation
│   └── handlers/                  # Batch processing utilities
│
├── dataset/
│   ├── question_image/            # Individual exam questions by domain
│   ├── metadata/                  # Question annotations and ground truth
│   └── images/                    # Dataset overview images
│
├── src/                           # Full pipeline for data extraction
│   ├── cut_question.py            # Question boundary detection
│   ├── convert_pdf_to_image.py    # PDF → PNG conversion
│   ├── check_question.html        # Manual verification interface
│   ├── choose_question.html       # Question selection tool
│   ├── ocr_ground_truth.html      # OCR verification tool
│   └── result.py                  # Accuracy analysis
│
└── api_key/                       # API credentials (not tracked)
    ├── claude_key.txt
    ├── openai_key.txt
    └── ...
```

---

## 📈 Key Findings

Our evaluation reveals several important insights:

1. **Strong OCR Performance:** VLMs achieve strong OCR performance on Vietnamese text (6% CER and 9% WER), confirming that poor exam accuracy stems from multimodal reasoning challenges rather than basic text recognition failures.
2. **Performance Gap:** SOTA VLMs achieve only 57% mean accuracy across the 7 domains, with Geography the most accessible (72%) and Physics the most challenging (44%).
3. **Thinking Models Excel:** The thinking VLM o3 substantially outperforms non-thinking VLMs (74% vs. 48-59%).
4. **Option B Bias:** VLMs exhibit a significant bias toward option B (31%) in multiple-choice questions, suggesting failures are not purely due to reasoning limitations but may be partially attributable to training data bias.
5. **Multimodal Challenge:** VLMs perform better on text-only questions (70%) than on multimodal questions (61%), confirming that multimodal integration poses fundamental challenges.
6. **Open-source Gap:** Open-source VLMs achieve substantially lower performance than closed-source/SOTA VLMs (27.7% vs. 57%).
7. **Cross-lingual Mixed Results:** Cross-lingual prompting shows mixed results, improving open-source VLMs (+2.9%) while hurting SOTA VLMs (-1.0%).
8. **Human-AI Collaboration:** Human-in-the-loop collaboration provides modest gains with OCR help (+0.48%) but substantial improvement with full text and image editing (+5.71%).

---

## Citation

If you find our dataset or model useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{dang2025viexam,
  title={ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?},
  author={Dang, Vy Tuong and Vo, An and Tau, Quang and Dm, Duc and Kim, Daeyoung},
  journal={arXiv preprint arXiv:2508.13680},
  year={2025}
}
```