--- license: cc-by-sa-4.0 task_categories: - question-answering - multiple-choice language: - ja configs: - config_name: v1.0 data_files: - split: test path: v1.0/test-* - split: dev path: v1.0/dev-* dataset_info: config_name: v1.0 features: - name: qid dtype: string - name: category dtype: string - name: question dtype: string - name: choice0 dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: choice3 dtype: string - name: answer_index dtype: int64 splits: - name: dev num_bytes: 7089 num_examples: 32 - name: test num_bytes: 515785 num_examples: 2309 download_size: 886472 dataset_size: 522874 --- # Dataset Card for JamC-QA English/[Japanese](README_ja.md) ## Dataset Summary This benchmark evaluates knowledge specific to Japan through multiple-choice questions. It covers eight categories: culture, custom, regional_identity, geography, history, government, law, and healthcare. Achieving high performance requires broad and detailed understanding of Japan across these categories. ## Leaderboard ### Evaluation Metric **Accuracy** In this multiple-choice question answering task, the LLM outputs the option string rather than the option label, and accuracy is calculated as the proportion of questions whose output exactly matches the gold correct option string. | Model | Micro-average | culture | custom | regional_identity | geography | history | government | law | healthcare | |:---|---:|---:|---:|---:|---:|---:|---:|---:|---:| | [sarashina2-8x70b](https://huggingface.co/sbintuitions/sarashina2-8x70b) | **0.7254** | 0.7141 | **0.7750** | **0.7607** | 0.6544 | **0.7843** | 0.7364 | 0.6321 | **0.9167** | | [sarashina2-70b](https://huggingface.co/sbintuitions/sarashina2-70b) | 0.7246 | **0.7188** | 0.7450 | 0.7355 | **0.6728** | 0.7638 | 0.7636 | 0.6656 | **0.9167** | | [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | 0.6973 | 0.6891 | **0.7750** | 0.5894 | 0.5662 | 0.7755 | **0.7727** | **0.7826** | 0.8542 | | [RakutenAI-2.0-8x7B](https://huggingface.co/Rakuten/RakutenAI-2.0-8x7B) | 0.6327 | 0.6219 | 0.7250 | 0.6171 | 0.5110 | 0.7143 | 0.7091 | 0.5753 | 0.8125 | | [plamo-100b](https://huggingface.co/pfnet/plamo-100b) | 0.6033 | 0.6016 | 0.6500 | 0.6373 | 0.5037 | 0.6822 | 0.6091 | 0.5151 | 0.6875 | | [Mixtral-8x7B-v0.1-japanese](https://huggingface.co/abeja/Mixtral-8x7B-v0.1-japanese) | 0.5929 | 0.6016 | 0.6700 | 0.5793 | 0.4926 | 0.6122 | 0.7364 | 0.5452 | 0.6667 | | [Meta-Llama-3.1-405B](https://huggingface.co/meta-llama/Llama-3.1-405B) | 0.5712 | 0.5578 | 0.5450 | 0.4836 | 0.5000 | 0.6793 | 0.6455 | 0.6288 | 0.6875 | | [llm-jp-3.1-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b) | 0.5682 | 0.5953 | 0.6350 | 0.5819 | 0.4485 | 0.5889 | 0.6273 | 0.5017 | 0.6250 | | [Nemotron-4-340B-Base](https://huggingface.co/mgoin/Nemotron-4-340B-Base-hf) | 0.5673 | 0.5734 | 0.6150 | 0.5113 | 0.4669 | 0.5948 | 0.7273 | 0.5819 | 0.6667 | | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | 0.5271 | 0.5219 | 0.5950 | 0.4257 | 0.4375 | 0.6064 | 0.6091 | 0.5619 | 0.6875 | ## Language Japanese ## Dataset Structure ### Data Instances An example from culture category looks as follows: ``` { "qid": "jamcqa-test-culture-00001", "category": "culture", "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?", "choice0": "影武者", "choice1": "羅生門", "choice2": "隠し砦の三悪人", "choice3": "乱", "answer_index": 3, } ``` ## Data Fields - `qid (str)`: A unique identifier for each question. - `category (str)`: The category of the question. - culture, custom, regional_identity, geography, history, government, law, and healthcare - `question (str)`: The question text. - Converted from full-width to half-width characters, excluding katakana characters. - Does not contain any line breaks (`\n`). - Leading and trailing whitespace is removed. - `choice{0..3} (str)`: Four answer options (`choice0` to `choice3`). - Converted from full-width to half-width characters, excluding katakana characters. - Does not contain any line breaks (`\n`). - Leading and trailing whitespace is removed. - `answer_index (int)`: The index of the correct answer among `choice0` to `choice3` (0–3). ## Data Splits - `dev`: 4 examples per category, intended for few-shot evaluation - `test`: 2,309 examples in total Number of Examples: | Category | dev | test | | --- | ---: | ---: | | culture | 4 | 640 | | custom | 4 | 200 | | regional_identity | 4 | 397 | | geography | 4 | 272 | | history | 4 | 343 | | government | 4 | 110 | | law | 4 | 299 | | healthcare | 4 | 48 | | total | 32 | 2,309 | # Licensing Information - [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/) # How to use ```python $ python >>> import datasets >>> jamcqa = datasets.load_dataset('sbintuitions/JamC-QA', 'v1.0') >>> print(jamcqa) DatasetDict({ test: Dataset({ features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'], num_rows: 2309 }) dev: Dataset({ features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'], num_rows: 32 }) }) >>> jamcqa_test = jamcqa['test'] >>> print(jamcqa_test) Dataset({ features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'], num_rows: 2309 }) >>> print(jamcqa_test[0]) {'qid': 'jamcqa-test-culture-00001', 'category': 'culture', 'question': '「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?', 'choice0': '影武者', 'choice1': '羅生門', 'choice2': '隠し砦の三悪人', 'choice3': '乱', 'answer_index': 3} >>> ``` # Citation Information ``` @inproceedings{Oka2025, author={岡 照晃, 柴田 知秀, 吉田 奈央}, title={JamC-QA: 日本固有の知識を問う多肢選択式質問応答ベンチマークの構築}, year={2025}, month={March}, booktitle={言語処理学会第31回年次大会(NLP2025)}, pages={839--844}, } ```