Add dataset card
README.md CHANGED
```diff
@@ -27,6 +27,12 @@ dataset_info:
     dtype: int32
   - name: question_type
     dtype: string
+  - name: question
+    dtype: string
+  - name: notation
+    dtype: string
+  - name: notation_type
+    dtype: string
   - name: option_a
     dtype: string
   - name: option_b
```
```diff
@@ -39,65 +45,59 @@ dataset_info:
     dtype: string
   - name: correct_idx
     dtype: int32
-  - name: notation
-    dtype: string
-  - name: notation_type
-    dtype: string
-  - name: question
-    dtype: string
   - name: image
     dtype: image
   splits:
   - name: fork
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: legal
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: puzzle
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: eval
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: carbon
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: hydrogen
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: weight
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: caption
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: notes
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: measures
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: forms
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: rhythm
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: path_counting
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: path_existence
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: shortest_path
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
   - name: bfs_traversal
-    num_bytes:
+    num_bytes: 0
     num_examples: 200
-  download_size:
-  dataset_size:
+  download_size: 0
+  dataset_size: 0
 configs:
 - config_name: default
   data_files:
```
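A quick local sanity check of the reordered schema and the declared split sizes (a sketch, assuming the `datasets` library; the `0` values for `num_bytes`/`download_size`/`dataset_size` are clearly placeholders, so only field order and example counts are checked):

```python
from datasets import load_dataset

# Load one split and compare against the card: question/notation/notation_type
# should now appear before the option_* fields, with 200 examples per split.
ds = load_dataset("lilvjosephtang/SEAM-Benchmark", split="fork")
print(list(ds.features))
assert len(ds) == 200
```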
```diff
@@ -155,8 +155,8 @@ Evaluating whether vision–language models (VLMs) reason consistently across re
 - **4 Domains**: Chess, Chemistry, Music, Graph Theory with standardized notations
 - **16 Tasks**: 4 tasks per domain (64 total task-modality combinations)
 - **3 Modalities**: Language-only (L), Vision-only (V), Vision-Language (VL)
-- **3,200
-- **9,600 Evaluations**:
+- **3,200 Base Samples**: 200 samples × 16 tasks
+- **9,600 Evaluations**: TaskLoader generates 3 modality-specific prompts per base sample
 - **Semantic Equivalence**: Same information presented in different representational formats
 
 ## Domains and Notation Systems
```
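The bullet arithmetic checks out; for the record (nothing assumed beyond the numbers already in the card):

```python
tasks = 4 * 4                    # 4 domains x 4 tasks each = 16
base_samples = tasks * 200       # 3,200 base samples
evaluations = base_samples * 3   # x 3 modalities (L/V/VL) = 9,600
print(tasks, base_samples, evaluations)  # 16 3200 9600
```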
```diff
@@ -190,7 +190,7 @@ The dataset is organized into 16 task-based splits (600 samples each):
 - **Music**: `notes`, `measures`, `forms`, `rhythm`
 - **Graph Theory**: `path_counting`, `path_existence`, `shortest_path`, `bfs_traversal`
 
-Each split contains
+Each split contains 200 base samples. TaskLoader generates modality-specific prompts (L, V, VL) from these base samples.
 
 ## Usage
 
```
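The new wording leans on TaskLoader for the L/V/VL expansion, but its API is not part of this diff; a hypothetical sketch of what that expansion amounts to (the names `ModalityPrompt` and `expand_modalities` are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ModalityPrompt:
    modality: str      # "L", "V", or "VL"
    text: str
    needs_image: bool

def expand_modalities(sample: dict) -> list[ModalityPrompt]:
    """Hypothetical: turn one base sample into three modality-specific prompts."""
    q, notation = sample["question"], sample["notation"]
    return [
        ModalityPrompt("L", f"{notation}\n\n{q}", needs_image=False),   # notation only
        ModalityPrompt("V", q, needs_image=True),                       # image only
        ModalityPrompt("VL", f"{notation}\n\n{q}", needs_image=True),   # image + notation
    ]
```

Read this way, 200 base samples × 3 prompts per sample also recovers the 600-per-task figure quoted in the Usage comments below.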
```diff
@@ -204,16 +204,15 @@ dataset = load_dataset("lilvjosephtang/SEAM-Benchmark")
 chess_fork = dataset["fork"] # Chess fork detection (600 samples)
 chemistry_carbon = dataset["carbon"] # Carbon atom counting (600 samples)
 
-#
-
-
-vision_language = chess_fork.filter(lambda x: x["modality"] == "Vision-Language")
+# Each task contains 200 base samples
+# TaskLoader generates modality-specific prompts (L/V/VL) from these base samples
+print(f"Task {chess_fork[0]['task']} has {len(chess_fork)} base samples")
 
 # Example sample structure
 sample = chess_fork[0]
 print(f"Task: {sample['task']}")
 print(f"Domain: {sample['domain']}")
-
+# No modality field - TaskLoader handles modality generation
 print(f"Question: {sample['question']}")
 print(f"Options: A) {sample['option_a']}, B) {sample['option_b']}, C) {sample['option_c']}, D) {sample['option_d']}")
 print(f"Correct Answer: {sample['correct_answer']}")
```
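Since the `modality` column is gone, the removed `.filter` example had no column left to act on; what remains checkable is the per-split size. A small verification loop (sketch, `datasets` library assumed):

```python
from datasets import load_dataset

dataset = load_dataset("lilvjosephtang/SEAM-Benchmark")
for name, split in dataset.items():
    # 200 base samples per task; x 3 generated prompts = the 600 in the comments
    assert len(split) == 200, f"{name}: expected 200, got {len(split)}"
print(f"verified {len(dataset)} splits")
```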
```diff
@@ -226,7 +225,7 @@ print(f"Notation: {sample['notation']}") # FEN string for chess
 Each sample contains:
 - `task`: Task identifier (e.g., "fork", "carbon")
 - `domain`: Domain category ("chess", "chemistry", "music", "graph")
--
+- No modality field (TaskLoader generates modality-specific prompts)
 - `index`: Sample index within the task
 - `question`: Question text (if applicable)
 - `notation`: Domain-specific notation (FEN, SMILES, ABC, adjacency matrix)
```
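The listed fields fully determine the multiple-choice structure; a minimal formatting sketch (the helper `format_mcq` is ours, not part of the dataset):

```python
def format_mcq(sample: dict) -> str:
    """Build an A-D prompt from the card's option fields."""
    lines = [sample["question"]]
    for letter, key in zip("ABCD", ["option_a", "option_b", "option_c", "option_d"]):
        lines.append(f"{letter}) {sample[key]}")
    return "\n".join(lines)

# sample["correct_idx"] indexes the four options; sample["correct_answer"] is the text.
```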
```diff
@@ -234,7 +233,7 @@ Each sample contains:
 - `option_a`, `option_b`, `option_c`, `option_d`: Multiple choice options
 - `correct_answer`: The correct answer
 - `correct_idx`: Index of the correct option
-- `image`: Associated image (PIL Image for
+- `image`: Associated image (PIL Image, None for base storage - TaskLoader handles image loading for V/VL modalities)
 
 ## Evaluation Protocol
 
```
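Because `image` may be `None` in base storage, consumers should guard before using it; continuing from the loading snippet above:

```python
sample = dataset["fork"][0]
if sample["image"] is not None:
    print(sample["image"].size)  # a PIL.Image when present
else:
    # Per the card, TaskLoader loads images itself for the V and VL modalities.
    print("no image stored with this base sample")
```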