---
license: cc-by-sa-4.0
task_categories:
  - question-answering
  - multiple-choice
language:
  - ja
configs:
  - config_name: v1.0
    data_files:
      - split: test
        path: v1.0/test-*
      - split: dev
        path: v1.0/dev-*
dataset_info:
  config_name: v1.0
  features:
    - name: qid
      dtype: string
    - name: category
      dtype: string
    - name: question
      dtype: string
    - name: choice0
      dtype: string
    - name: choice1
      dtype: string
    - name: choice2
      dtype: string
    - name: choice3
      dtype: string
    - name: answer_index
      dtype: int64
  splits:
    - name: dev
      num_bytes: 7089
      num_examples: 32
    - name: test
      num_bytes: 515785
      num_examples: 2309
  download_size: 1174968
  dataset_size: 522874
---

# Dataset Card for JamC-QA

English/Japanese

## Dataset Summary

This benchmark evaluates knowledge specific to Japan through multiple-choice questions. It covers eight categories: culture, custom, regional_identity, geography, history, government, law, and healthcare. Achieving high performance requires broad and detailed understanding of Japan across these categories.

## Leaderboard

### Evaluation Metric

In our evaluation, the LLM outputs the option string rather than the option label, and the following table shows the proportion of outputs that exactly match the gold option string.

| Model | All | culture | custom | regional_identity | geography | history | government | law | healthcare |
|---|---|---|---|---|---|---|---|---|---|
| sarashina2-8x70b | 0.725 | 0.714 | 0.775 | 0.761 | 0.654 | 0.784 | 0.736 | 0.632 | 0.917 |
| sarashina2-70b | 0.725 | 0.719 | 0.745 | 0.736 | 0.673 | 0.764 | 0.764 | 0.666 | 0.917 |
| Llama-3.3-Swallow-70B-v0.4 | 0.697 | 0.689 | 0.775 | 0.589 | 0.566 | 0.776 | 0.773 | 0.783 | 0.854 |
| RakutenAI-2.0-8x7B | 0.633 | 0.622 | 0.725 | 0.617 | 0.511 | 0.714 | 0.709 | 0.575 | 0.813 |
| plamo-100b | 0.603 | 0.602 | 0.650 | 0.637 | 0.504 | 0.682 | 0.609 | 0.515 | 0.688 |
| Mixtral-8x7B-v0.1-japanese | 0.593 | 0.602 | 0.670 | 0.579 | 0.493 | 0.612 | 0.736 | 0.545 | 0.667 |
| Meta-Llama-3.1-405B | 0.571 | 0.558 | 0.545 | 0.484 | 0.500 | 0.679 | 0.646 | 0.629 | 0.688 |
| llm-jp-3.1-8x13b | 0.568 | 0.595 | 0.635 | 0.582 | 0.449 | 0.589 | 0.627 | 0.502 | 0.625 |
| Nemotron-4-340B-Base | 0.567 | 0.573 | 0.615 | 0.511 | 0.467 | 0.595 | 0.727 | 0.582 | 0.667 |
| Qwen2.5-72B | 0.527 | 0.522 | 0.595 | 0.426 | 0.438 | 0.606 | 0.609 | 0.562 | 0.688 |
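
For reference, the sketch below shows how this exact-match metric can be computed outside FlexEval. It is a minimal illustration, not the evaluation code used for the leaderboard; `predict` is a placeholder for whatever generation call your model uses.

```python
from datasets import load_dataset

def exact_match_accuracy(examples, predict):
    """Share of questions whose generated answer exactly equals the gold option string."""
    correct = 0
    for ex in examples:
        gold = ex[f"choice{ex['answer_index']}"]  # gold option string, not the label 0-3
        output = predict(ex)                      # placeholder: returns the model's generated string
        correct += int(output.strip() == gold)
    return correct / len(examples)

test = load_dataset("sbintuitions/JamC-QA", "v1.0", split="test")
# accuracy = exact_match_accuracy(test, predict=my_generate_fn)
```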

## Language

Japanese

## Dataset Structure

### Data Instances

An example from the `culture` category looks as follows:

```json
{
  "qid": "jamcqa-test-culture-00001",
  "category": "culture",
  "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?",
  "choice0": "影武者",
  "choice1": "羅生門",
  "choice2": "隠し砦の三悪人",
  "choice3": "乱",
  "answer_index": 3
}
```

### Data Fields

- `qid` (str): A unique identifier for each question.
- `category` (str): The category of the question.
  - One of `culture`, `custom`, `regional_identity`, `geography`, `history`, `government`, `law`, or `healthcare`.
- `question` (str): The question text.
  - Full-width characters are converted to half-width, excluding katakana (see the sketch after this list).
  - Does not contain any line breaks (`\n`).
  - Leading and trailing whitespace is removed.
- `choice{0..3}` (str): The four answer options (`choice0` to `choice3`).
  - Full-width characters are converted to half-width, excluding katakana.
  - Does not contain any line breaks (`\n`).
  - Leading and trailing whitespace is removed.
- `answer_index` (int): The index (0–3) of the correct answer among `choice0` to `choice3`.
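
As a rough illustration of the normalization rules stated above (not the actual preprocessing script used to build the dataset), full-width ASCII can be mapped to half-width while leaving katakana untouched as follows:

```python
# Map full-width ASCII (U+FF01-U+FF5E) and the ideographic space (U+3000)
# to their half-width counterparts; katakana is deliberately left unchanged.
_FULLWIDTH_TO_HALFWIDTH = {code: code - 0xFEE0 for code in range(0xFF01, 0xFF5F)}
_FULLWIDTH_TO_HALFWIDTH[0x3000] = 0x20  # ideographic space -> ASCII space

def normalize(text: str) -> str:
    text = text.translate(_FULLWIDTH_TO_HALFWIDTH)
    text = text.replace("\n", " ")  # fields contain no line breaks
    return text.strip()             # leading/trailing whitespace removed
```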

### Data Splits

- `dev`: 4 examples per category, intended for few-shot evaluation
- `test`: 2,309 examples in total

Number of examples:

| Category | dev | test |
|---|---|---|
| culture | 4 | 640 |
| custom | 4 | 200 |
| regional_identity | 4 | 397 |
| geography | 4 | 272 |
| history | 4 | 343 |
| government | 4 | 110 |
| law | 4 | 299 |
| healthcare | 4 | 48 |
| total | 32 | 2,309 |
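
Because the dev split holds exactly 4 examples per category, a natural way to run few-shot evaluation is to prepend the dev examples of the matching category to each test question. The sketch below assumes a plain-text prompt with Japanese labels (質問/答え); the exact template behind the leaderboard results is not specified here.

```python
from datasets import load_dataset

jamcqa = load_dataset("sbintuitions/JamC-QA", "v1.0")
dev, test = jamcqa["dev"], jamcqa["test"]

def format_example(ex, with_answer=True):
    lines = [f"質問: {ex['question']}"]
    for i in range(4):
        lines.append(f"{i}. " + ex[f"choice{i}"])
    gold = ex[f"choice{ex['answer_index']}"]
    lines.append(f"答え: {gold}" if with_answer else "答え:")
    return "\n".join(lines)

def build_prompt(test_example):
    # 4-shot prompt: dev examples from the same category, followed by the test question.
    shots = [format_example(ex) for ex in dev if ex["category"] == test_example["category"]]
    return "\n\n".join(shots + [format_example(test_example, with_answer=False)])

print(build_prompt(test[0]))
```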

## Licensing Information

This dataset is distributed under the CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0) license.

## Usage

### Dataset Loading

```python
$ python
>>> import datasets
>>> jamcqa = datasets.load_dataset('sbintuitions/JamC-QA', 'v1.0')
>>> print(jamcqa)
DatasetDict({
    test: Dataset({
        features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
        num_rows: 2309
    })
    dev: Dataset({
        features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
        num_rows: 32
    })
})
>>> jamcqa_test = jamcqa['test']
>>> print(jamcqa_test)
Dataset({
    features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
    num_rows: 2309
})
>>> print(jamcqa_test[0])
{'qid': 'jamcqa-test-culture-00001', 'category': 'culture', 'question': '「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?', 'choice0': '影武者', 'choice1': '羅生門', 'choice2': '隠し砦の三悪人', 'choice3': '乱', 'answer_index': 3}
```

### Evaluation with FlexEval

You can use FlexEval (version 0.13.3 or later) to evaluate JamC-QA by simply replacing `commonsense_qa` with `jamcqa` in its Quickstart guide.

#### Run Command

```bash
flexeval_lm \
  --language_model HuggingFaceLM \
  --language_model.model "sbintuitions/sarashina2.2-0.5b" \
  --language_model.default_gen_kwargs "{ do_sample: false }" \
  --eval_setup "jamcqa" \
  --save_dir "results/jamcqa"
```

`--language_model.default_gen_kwargs "{ do_sample: false }"` disables sampling and performs greedy search.

#### Output

```
...
2025-09-03 15:48:24.633 | INFO     | flexeval.core.evaluate_generation:evaluate_generation:92 - {'exact_match': 0.2368990905153746, 'finish_reason_ratio-stop': 1.0, 'avg_output_length': 6.94283239497618, 'max_output_length': 93, 'min_output_length': 2}
...
```

## Citation Information

```bibtex
@inproceedings{Oka2025,
  author={岡 照晃 and 柴田 知秀 and 吉田 奈央},
  title={JamC-QA: 日本固有の知識を問う多肢選択式質問応答ベンチマークの構築},
  year={2025},
  month={March},
  booktitle={言語処理学会第31回年次大会(NLP2025)},
  pages={839--844},
}
```