---
license: cc-by-sa-4.0
task_categories:
- question-answering
- multiple-choice
language:
- ja
configs:
- config_name: v1.0
data_files:
- split: test
path: v1.0/test-*
- split: dev
path: v1.0/dev-*
dataset_info:
config_name: v1.0
features:
- name: qid
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: choice0
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: choice3
dtype: string
- name: answer_index
dtype: int64
splits:
- name: dev
num_bytes: 7089
num_examples: 32
- name: test
num_bytes: 515785
num_examples: 2309
download_size: 1174968
dataset_size: 522874
---
# Dataset Card for JamC-QA
English/[Japanese](README_ja.md)
## Dataset Summary
This benchmark evaluates knowledge specific to Japan through multiple-choice questions.
It covers eight categories: culture, custom, regional_identity, geography, history, government, law, and healthcare.
Achieving high performance requires a broad and detailed understanding of Japan across these categories.
## Leaderboard
### Evaluation Metric
In our evaluation, the LLM is prompted to output the option string itself rather than an option label,
and the table below reports the proportion of outputs that exactly match the gold option string.
| Model | All | culture | custom | regional_identity | geography | history | government | law | healthcare |
|:---|----|---:|---:|---:|---:|---:|---:|---:|---:|
| [sarashina2-8x70b](https://huggingface.co/sbintuitions/sarashina2-8x70b) | **0.725** | 0.714 | **0.775** | **0.761** | 0.654 | **0.784** | 0.736 | 0.632 | **0.917** |
| [sarashina2-70b](https://huggingface.co/sbintuitions/sarashina2-70b) | **0.725** | **0.719** | 0.745 | 0.736 | **0.673** | 0.764 | 0.764 | 0.666 | **0.917** |
| [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | 0.697 | 0.689 | **0.775** | 0.589 | 0.566 | 0.776 | **0.773** | **0.783** | 0.854 |
| [RakutenAI-2.0-8x7B](https://huggingface.co/Rakuten/RakutenAI-2.0-8x7B) | 0.633 | 0.622 | 0.725 | 0.617 | 0.511 | 0.714 | 0.709 | 0.575 | 0.813 |
| [plamo-100b](https://huggingface.co/pfnet/plamo-100b) | 0.603 | 0.602 | 0.650 | 0.637 | 0.504 | 0.682 | 0.609 | 0.515 | 0.688 |
| [Mixtral-8x7B-v0.1-japanese](https://huggingface.co/abeja/Mixtral-8x7B-v0.1-japanese) | 0.593 | 0.602 | 0.670 | 0.579 | 0.493 | 0.612 | 0.736 | 0.545 | 0.667 |
| [Meta-Llama-3.1-405B](https://huggingface.co/meta-llama/Llama-3.1-405B) | 0.571 | 0.558 | 0.545 | 0.484 | 0.500 | 0.679 | 0.646 | 0.629 | 0.688 |
| [llm-jp-3.1-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b) | 0.568 | 0.595 | 0.635 | 0.582 | 0.449 | 0.589 | 0.627 | 0.502 | 0.625 |
| [Nemotron-4-340B-Base](https://huggingface.co/mgoin/Nemotron-4-340B-Base-hf) | 0.567 | 0.573 | 0.615 | 0.511 | 0.467 | 0.595 | 0.727 | 0.582 | 0.667 |
| [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | 0.527 | 0.522 | 0.595 | 0.426 | 0.438 | 0.606 | 0.609 | 0.562 | 0.688 |
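The scoring above can be sketched as follows; `exact_match_accuracy`, `predictions`, and `examples` are hypothetical names for illustration, not part of FlexEval's API:

```python
def exact_match_accuracy(predictions, examples):
    """Fraction of model outputs that exactly match the gold option string.

    `predictions` is a list of raw model output strings; `examples` is a list
    of dicts following the JamC-QA schema (choice0..choice3, answer_index).
    """
    correct = 0
    for pred, ex in zip(predictions, examples):
        # The gold answer is the choice string selected by answer_index.
        gold = ex["choice{}".format(ex["answer_index"])]
        if pred.strip() == gold:
            correct += 1
    return correct / len(examples)
```

For example, with the culture instance shown below (`answer_index` 3), the prediction `"乱"` scores 1.0.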
## Language
Japanese
## Dataset Structure
### Data Instances
An example from the culture category looks as follows:
```json
{
    "qid": "jamcqa-test-culture-00001",
    "category": "culture",
    "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?",
    "choice0": "影武者",
    "choice1": "羅生門",
    "choice2": "隠し砦の三悪人",
    "choice3": "乱",
    "answer_index": 3
}
```
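The gold answer string can be recovered from `answer_index` by indexing into the choice fields:

```python
example = {
    "qid": "jamcqa-test-culture-00001",
    "category": "culture",
    "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?",
    "choice0": "影武者",
    "choice1": "羅生門",
    "choice2": "隠し砦の三悪人",
    "choice3": "乱",
    "answer_index": 3,
}
# answer_index selects among choice0..choice3.
gold = example["choice{}".format(example["answer_index"])]
print(gold)  # 乱
```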
### Data Fields
- `qid (str)`: A unique identifier for each question.
- `category (str)`: The category of the question.
- culture, custom, regional_identity, geography, history, government, law, and healthcare
- `question (str)`: The question text.
- Converted from full-width to half-width characters, excluding katakana characters.
- Does not contain any line breaks (`\n`).
- Leading and trailing whitespace is removed.
- `choice{0..3} (str)`: Four answer options (`choice0` to `choice3`).
- Converted from full-width to half-width characters, excluding katakana characters.
- Does not contain any line breaks (`\n`).
- Leading and trailing whitespace is removed.
- `answer_index (int)`: The index of the correct answer among `choice0` to `choice3` (0–3).
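The normalization described above can be sketched as follows. The dataset's actual preprocessing code is not published, so this is only an approximation: full-width ASCII characters and the ideographic space are mapped to their half-width counterparts, katakana are left untouched, and surrounding whitespace is stripped.

```python
# Full-width ASCII (U+FF01–U+FF5E) maps to half-width by a fixed offset;
# the ideographic space (U+3000) maps to an ASCII space. Katakana fall
# outside this range and are left as-is.
_FW_TO_HW = {code: code - 0xFEE0 for code in range(0xFF01, 0xFF5F)}
_FW_TO_HW[0x3000] = 0x20

def normalize(text: str) -> str:
    """Sketch of the field normalization: half-width conversion plus strip."""
    return text.translate(_FW_TO_HW).strip()
```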
### Data Splits
- `dev`: 4 examples per category, intended for few-shot evaluation
- `test`: 2,309 examples in total
Number of Examples:
| Category | dev | test |
| --- | ---: | ---: |
| culture | 4 | 640 |
| custom | 4 | 200 |
| regional_identity | 4 | 397 |
| geography | 4 | 272 |
| history | 4 | 343 |
| government | 4 | 110 |
| law | 4 | 299 |
| healthcare | 4 | 48 |
| total | 32 | 2,309 |
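These counts can be reproduced from a loaded split with a simple tally; `category_counts` is a hypothetical helper:

```python
from collections import Counter

def category_counts(examples):
    """Tally examples per category; works on any iterable of JamC-QA rows."""
    return Counter(ex["category"] for ex in examples)

# With the loaded dataset (see the Usage section) this reproduces the
# table above, e.g. category_counts(jamcqa["test"])["culture"] -> 640.
```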
## Licensing Information
- [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Usage
### Dataset Loading
```python
>>> import datasets
>>> jamcqa = datasets.load_dataset('sbintuitions/JamC-QA', 'v1.0')
>>> print(jamcqa)
DatasetDict({
test: Dataset({
features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
num_rows: 2309
})
dev: Dataset({
features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
num_rows: 32
})
})
>>> jamcqa_test = jamcqa['test']
>>> print(jamcqa_test)
Dataset({
features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
num_rows: 2309
})
>>> print(jamcqa_test[0])
{'qid': 'jamcqa-test-culture-00001', 'category': 'culture', 'question': '「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?', 'choice0': '影武者', 'choice1': '羅生門', 'choice2': '隠し砦の三悪人', 'choice3': '乱', 'answer_index': 3}
```
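For few-shot evaluation with the `dev` split, a prompt can be assembled along these lines; the layout and the helper names are illustrative, not the template FlexEval actually uses:

```python
def format_example(ex, with_answer=True):
    """Render one JamC-QA row as a question block (illustrative format)."""
    lines = ["質問: " + ex["question"]]
    for i in range(4):
        lines.append("{}. {}".format(i, ex["choice{}".format(i)]))
    if with_answer:
        # Few-shot demonstrations end with the gold option string.
        gold = ex["choice{}".format(ex["answer_index"])]
        lines.append("答え: " + gold)
    else:
        # The test question ends with an open answer cue for the model.
        lines.append("答え:")
    return "\n".join(lines)

def build_prompt(dev_examples, test_example):
    """Concatenate dev demonstrations followed by the unanswered test question."""
    shots = [format_example(ex) for ex in dev_examples]
    return "\n\n".join(shots + [format_example(test_example, with_answer=False)])
```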
### Evaluation with FlexEval
You can evaluate JamC-QA with [FlexEval](https://github.com/sbintuitions/flexeval) (version 0.13.3 or later)
by simply replacing `commonsense_qa` with `jamcqa` in its
[Quickstart](https://github.com/sbintuitions/flexeval?tab=readme-ov-file#quick-start) guide.
#### Run Command
```bash
flexeval_lm \
--language_model HuggingFaceLM \
--language_model.model "sbintuitions/sarashina2.2-0.5b" \
--language_model.default_gen_kwargs "{ do_sample: false }" \
--eval_setup "jamcqa" \
--save_dir "results/jamcqa"
```
`--language_model.default_gen_kwargs "{ do_sample: false }"` disables sampling and performs
[greedy search](https://huggingface.co/docs/transformers/generation_strategies#greedy-search).
#### Output
```
...
2025-09-03 15:48:24.633 | INFO | flexeval.core.evaluate_generation:evaluate_generation:92 - {'exact_match': 0.2368990905153746, 'finish_reason_ratio-stop': 1.0, 'avg_output_length': 6.94283239497618, 'max_output_length': 93, 'min_output_length': 2}
...
```
## Citation Information
```bibtex
@inproceedings{Oka2025,
    author={岡 照晃 and 柴田 知秀 and 吉田 奈央},
    title={JamC-QA: 日本固有の知識を問う多肢選択式質問応答ベンチマークの構築},
    year={2025},
    month={March},
    booktitle={言語処理学会第31回年次大会 (NLP2025)},
    pages={839--844},
}
```