---
language:
- en
- hi
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
task_categories:
- table-question-answering
- visual-question-answering
- image-text-to-text
tags:
- cricket
configs:
- config_name: default
  data_files:
  - split: test_single
    path: data/test_single-*
  - split: test_multi
    path: data/test_multi-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: images
    sequence: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: category
    dtype: string
  - name: subset
    dtype: string
  splits:
  - name: test_single
    num_bytes: 976385438.0
    num_examples: 2000
  - name: test_multi
    num_bytes: 904538778.0
    num_examples: 997
  download_size: 1573738795
  dataset_size: 1880924216.0
---
# MMCricBench 🏏
**Multimodal Cricket Scorecard Benchmark for VQA**
This repository contains the dataset for the paper [Mind the (Language) Gap: Towards Probing Numerical and Cross-Lingual Limits of LVLMs](https://huggingface.co/papers/2508.17334).
MMCricBench evaluates **Large Vision-Language Models (LVLMs)** on **numerical reasoning**, **cross-lingual understanding**, and **multi-image reasoning** over semi-structured cricket scorecard images. It includes English and Hindi scorecards; all questions/answers are in English.
---
## Overview
- **Images:** 1,463 synthetic scorecards (PNG)
- 822 single-image scorecards
- 641 multi-image scorecards
- **QA pairs:** 1,500 (English)
- **Reasoning categories:**
- **C1** – Direct retrieval & simple inference
- **C2** – Basic arithmetic & conditional logic
- **C3** – Multi-step quantitative reasoning (often across images)
---
## Files / Splits
We provide two evaluation splits:
- `test_single` — single-image questions
- `test_multi` — multi-image questions
> If you keep a single JSONL (e.g., `test_all.jsonl`), use a **list** for `images` in every row. Single-image rows should have a one-element list. On the Hub, we expose two test splits.
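The combined-JSONL layout can be loaded locally with `datasets`. The sketch below assumes a hypothetical `test_all.jsonl` placed next to the image folders, with `images` holding relative paths in every row:

```python
from datasets import load_dataset, Sequence, Image

# Minimal sketch: load a local combined JSONL (hypothetical file name)
# where every row stores `images` as a list of relative image paths.
ds_local = load_dataset("json", data_files="test_all.jsonl", split="train")

# Decode the path lists into PIL images; paths are assumed to be
# resolvable from the current working directory.
ds_local = ds_local.cast_column("images", Sequence(Image()))

print(ds_local[0]["question"], "->", len(ds_local[0]["images"]), "image(s)")
```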
---
## Data Schema
Each row is a JSON object:
| Field | Type | Description |
|------------|---------------------|----------------------------------------------|
| `id` | `string` | Unique identifier |
| `images` | `list[string]` | Paths to one or more scorecard images |
| `question` | `string` | Question text (English) |
| `answer` | `string` | Ground-truth answer (canonicalized) |
| `category` | `string` (`C1/C2/C3`)| Reasoning category |
| `subset`   | `string` (`single/multi`) | Optional convenience field |
**Example (single-image):**
```json
{"id":"english-single-9","images":["English-apr/single_image/1198246_2innings_with_color1.png"],"question":"Which bowler has conceded the most extras?","answer":"Wahab Riaz","category":"C2","subset":"single"}
```
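For a local JSONL copy, a quick consistency check over these fields might look like the following; `test_all.jsonl` is again a hypothetical file name, and the checks simply mirror the schema table above:

```python
import json

# Required fields per the schema table; `images` must be a non-empty list
# and `category` one of C1/C2/C3.
required = {"id", "images", "question", "answer", "category", "subset"}

with open("test_all.jsonl", encoding="utf-8") as f:
    for line_no, line in enumerate(f, 1):
        row = json.loads(line)
        missing = required - row.keys()
        assert not missing, f"row {line_no} missing fields: {missing}"
        assert isinstance(row["images"], list) and row["images"], f"row {line_no}: empty images"
        assert row["category"] in {"C1", "C2", "C3"}, f"row {line_no}: bad category"
```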
## Loading & Preview
### Load from the Hub (two-split layout)
```python
from datasets import load_dataset
# Loads: DatasetDict({'test_single': ..., 'test_multi': ...})
ds = load_dataset("DIALab/MMCricBench")
print(ds)
# Peek a single-image example
ex = ds["test_single"][0]
print(ex["id"])
print(ex["question"], "->", ex["answer"])
# Preview images (each example stores a list of PIL images)
from IPython.display import display
for img in ex["images"]:
display(img)
```
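Because every row carries `category` (and optionally `subset`), you can slice the benchmark with the standard `datasets.Dataset.filter` API. The id-prefix filter below assumes the `english-*`/`hindi-*` naming seen in the example row above; it is not guaranteed by the schema.

```python
# Keep only multi-step quantitative reasoning questions (C3) from the
# single-image split.
c3_single = ds["test_single"].filter(lambda ex: ex["category"] == "C3")
print(len(c3_single))

# Assumption: Hindi-scorecard rows are identifiable by an id prefix
# mirroring the "english-single-9" pattern shown earlier.
hindi_single = ds["test_single"].filter(lambda ex: ex["id"].startswith("hindi"))
print(len(hindi_single))
```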
## Baseline Results (from the paper)
Accuracy (%) on MMCricBench by split and language.
| Model | #Params | Single-EN (Avg) | Single-HI (Avg) | Multi-EN (Avg) | Multi-HI (Avg) |
|-------------------|:------:|:---------------:|:---------------:|:--------------:|:--------------:|
| SmolVLM | 500M | 19.2 | 19.0 | 11.8 | 11.6 |
| Qwen2.5VL | 3B | 40.2 | 33.3 | 31.2 | 22.0 |
| LLaVA-NeXT | 7B | 28.3 | 26.6 | 16.2 | 14.8 |
| mPLUG-DocOwl2 | 8B | 20.7 | 19.9 | 15.2 | 14.4 |
| Qwen2.5VL | 7B | 49.1 | 42.6 | 37.0 | 32.2 |
| InternVL-2 | 8B | 29.4 | 23.4 | 18.6 | 18.2 |
| Llama-3.2-V | 11B | 27.3 | 24.8 | 26.2 | 20.4 |
| **GPT-4o** | — | **57.3** | **45.1** | **50.6** | **43.6** |
*Numbers are exact-match accuracy (higher is better). For C1/C2/C3 breakdowns, see Table 3 (single-image) and Table 5 (multi-image) in the paper.*
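A minimal exact-match scorer in the spirit of this metric could look like the sketch below; `predict` is a placeholder for your model call, and the lower-casing/whitespace normalization is an assumption rather than the paper's official evaluation script.

```python
def normalize(text: str) -> str:
    # Assumed normalization: trim, lower-case, collapse whitespace.
    return " ".join(text.strip().lower().split())

def exact_match_accuracy(examples, predict) -> float:
    # `predict(images, question)` is a placeholder for your model call.
    correct = sum(
        normalize(predict(ex["images"], ex["question"])) == normalize(ex["answer"])
        for ex in examples
    )
    return 100.0 * correct / len(examples)

# Example: score the single-image split with a dummy predictor.
acc = exact_match_accuracy(ds["test_single"], lambda imgs, q: "")
print(f"exact-match accuracy: {acc:.1f}%")
```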
## Contact
For questions or issues, please open a discussion on the dataset page or email **Abhirama Subramanyam** at [email protected] |