Japanese Bar Examination QA Dataset v2
Dataset Summary
This is an enhanced version of the Japanese Bar Examination QA dataset, containing question-answer pairs from the Japanese Bar Examination (司法試験, Shihou Shiken) spanning from 2015 to 2024. The dataset covers three major areas of Japanese law: Criminal Law (刑法), Constitutional Law (憲法), and Civil Law (民法).
This version (v2) includes significant improvements in data extraction and processing, with enhanced support for complex question formats and better contextual information.
What's New in v2
Key Improvements over v1
- Enhanced Question Extraction: Improved XML parsing that correctly handles complex question formats and edge cases
- Lead-in Context Support: Added support for questions with contextual lead-in text (見解, opinions) using the new `lead_in` field
- Remark Annotations: Separated important annotations from instructions into a dedicated `remark` field for better clarity
- Improved Instruction Clarity: More explicit and standardized instruction text
- Increased Dataset Size: 3,464 questions (vs. 2,846 in v1) - a 21.7% increase
- Better Data Quality: Fixed mapping issues between questions and answers, eliminated gold label mismatches
Technical Enhancements
- Lead-in Text Extraction: 211 questions (6.1%) now include contextual lead-in text that was previously lost
- Remark Annotations: 53 questions (1.5%) include important annotations separated from instructions
- Comprehensive Instructions: 3,464 questions (100.0%) have clear, standardized instructions
- Extended Coverage: Additional questions from years that were only partially processed in earlier extraction
Dataset Details
- Total Questions: 3,464
- Time Period: 2015-2024 (10 years)
- Languages: Japanese
- Task: Binary classification (True/False questions)
- Domains: Legal, specifically Japanese law
Data Fields
- `id`: Unique identifier for each question (format: year-subject-question_number-sub_number)
- `year`: Examination year in Japanese era format (e.g., "令和元年", "平成27年")
- `subject`: Legal subject in English (Criminal Law, Constitutional Law, Civil Law)
- `subject_jp`: Legal subject in Japanese (刑法, 憲法, 民法)
- `theme`: Specific legal topic/theme in Japanese
- `question_type`: Type of question (CSQ, MRQ, TFQ)
- `instruction`: Clear instructions for answering the question
- `lead_in`: [NEW] Contextual lead-in text providing background information (e.g., legal opinions, case scenarios)
- `remark`: [NEW] Important annotations and clarifications separated from the instructions
- `question`: Question text in Japanese
- `label`: Original label (Y for True, N for False)
- `answer`: Standardized answer (True/False)
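To make the field layout concrete, here is a minimal sketch (using the repository name from the Usage Examples section below) that loads one record, prints each field, and splits the `id` into its documented components; the hyphen-splitting is an assumption based on the format string above.

```python
from datasets import load_dataset

# Minimal sketch: load one record and print every field listed above.
dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")
example = dataset["train"][0]

for field in ["id", "year", "subject", "subject_jp", "theme", "question_type",
              "instruction", "lead_in", "remark", "question", "label", "answer"]:
    print(f"{field}: {example[field]}")

# The id is documented as year-subject-question_number-sub_number,
# e.g. "令和3年-民法-24-3"; splitting on "-" recovers those parts
# (assuming no component itself contains a hyphen).
parts = example["id"].split("-")
print(parts)
```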
Data Splits
The dataset is split into three parts:
- Train: 2,771 questions (80%)
- Validation: 346 questions (10%)
- Test: 347 questions (10%)
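As a quick sanity check, the sketch below prints each split's size and share; it assumes the repository name used in the Usage Examples section.

```python
from datasets import load_dataset

# Print the size of each split and its share of the full dataset.
dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")
total = sum(len(dataset[split]) for split in ("train", "validation", "test"))
for split in ("train", "validation", "test"):
    n = len(dataset[split])
    print(f"{split:<12} {n:>5} ({n / total:.0%})")
```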
Subject Distribution
| Subject | Count | Percentage |
|---|---|---|
| Civil Law | 1,998 | 57.7% |
| Criminal Law | 811 | 23.4% |
| Constitutional Law | 655 | 18.9% |
Answer Distribution
| Answer | Count | Percentage |
|---|---|---|
| False | 1,814 | 52.4% |
| True | 1,650 | 47.6% |
Question Types
| Type | Description | Count | Percentage |
|---|---|---|---|
| CSQ | Choice of Single Question | 2,396 | 69.2% |
| MRQ | Multiple Response Question | 593 | 17.1% |
| TFQ | True/False Question | 475 | 13.7% |
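The distribution tables above can be recomputed directly from the data; the sketch below counts field values over all three splits, using the field names from the Data Fields section.

```python
from collections import Counter
from datasets import load_dataset

# Recount subjects, answers, and question types over all splits.
dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")
rows = [ex for split in ("train", "validation", "test") for ex in dataset[split]]

for field in ("subject", "answer", "question_type"):
    counts = Counter(ex[field] for ex in rows)
    for value, count in counts.most_common():
        print(f"{field}: {value:<20} {count:>5} ({count / len(rows):.1%})")
```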
New Features in v2
Lead-in Context
211 questions (6.1%) now include lead-in text that provides essential context:
- Legal Opinions (見解): Different theoretical perspectives on legal issues
- Case Scenarios (事例): Factual backgrounds for legal analysis
- Comparative Views: Multiple viewpoints on legal interpretations
Example with lead-in:
```json
{
  "id": "令和6年-刑法-11-1",
  "lead_in": "【見解】A説:行為者が、やろうと思えばできたが中止した場合を中止犯とし...",
  "question": "A説の立場からは、中止行為が反省・悔悟等の自己の行為に対する否定的な感情に基づく場合に限り、中止犯が成立する。",
  "instruction": "次の各【見解】に対して記述が正しいか"
}
```
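A sketch for selecting only the questions that carry lead-in context is shown below; it assumes `lead_in` is an empty string when absent, as in the example data point under Usage Examples.

```python
from datasets import load_dataset

# Keep only training questions whose lead_in field is non-empty.
dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")
with_lead_in = dataset["train"].filter(lambda ex: ex["lead_in"] != "")
print(f"{len(with_lead_in)} training questions include lead-in context")
```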
Remark Annotations
53 questions (1.5%) include important clarifications:
- Assumptions: Conditions to assume when answering (e.g., "Bは、無資力であり...")
- Legal Frameworks: Specific legal provisions to consider
- Contextual Notes: Additional information for proper interpretation
Example with remark:
```json
{
  "id": "令和5年-民法-18-1",
  "instruction": "判例の趣旨に照らし記述が正しいか",
  "remark": "Bは、無資力であり、各行為が債権者を害することを知っていたものとする。"
}
```
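The dataset does not prescribe how these fields should be combined into model input; the helper below is one hypothetical way to assemble the instruction, lead-in, remark, and question into a single prompt string, dropping fields that are empty.

```python
def build_input_text(example: dict) -> str:
    """Assemble one prompt string from a v2 record.

    The field ordering and newline separators are illustrative
    assumptions, not a format prescribed by the dataset.
    """
    parts = [
        example["instruction"],  # standardized answering instruction
        example["lead_in"],      # optional 見解 / scenario context
        example["remark"],       # optional assumptions and clarifications
        example["question"],     # the statement to judge as True/False
    ]
    return "\n".join(part for part in parts if part)
```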
Usage Examples
Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")

# Access different splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
```
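Continuing the snippet above, the sketch below derives 0/1 targets from the `answer` field for a True/False classifier; the `binary_label` column name is only an illustrative choice.

```python
from datasets import load_dataset

# Map the string answers onto integer targets (answer is assumed to be
# exactly "True" or "False", per the Answer Distribution table).
dataset = load_dataset("nguyenthanhasia/japanese-bar-exam-qa-v2")

def add_binary_label(example):
    example["binary_label"] = 1 if example["answer"] == "True" else 0
    return example

train_data = dataset["train"].map(add_binary_label)
print(train_data[0]["id"], train_data[0]["binary_label"])
```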
Example Data Point
```json
{
  "id": "令和3年-民法-24-3",
  "year": "令和3年",
  "subject": "Civil Law",
  "subject_jp": "民法",
  "theme": "贈与",
  "question_type": "CSQ",
  "instruction": "判例の趣旨に照らし記述が正しいか",
  "lead_in": "",
  "remark": "",
  "question": "受贈者は,贈与契約が書面によらない場合であっても,履行の終わっていない部分について贈与契約を解除することができない。",
  "label": "N",
  "answer": "False"
}
```
Applications
This enhanced dataset can be used for:
- Advanced Legal NLP Research: With improved context through lead-in text and remarks
- Legal Reasoning Systems: Better understanding of legal argumentation patterns
- Educational AI Tools: More comprehensive study materials for law students
- Legal AI Development: Training models with richer contextual information
- Cross-lingual Legal Studies: Enhanced comparative legal research capabilities
Improvements from v1
Data Quality Enhancements
- Fixed Gold Label Issues: Resolved answer mapping problems identified in v1
- Enhanced Extraction: Improved handling of complex question formats
- Better Parsing: More robust XML processing with error handling
- Standardized Instructions: Clearer and more consistent instruction text
Coverage Expansion
- Additional Questions: 618 new questions recovered from source material that was only partially processed in v1
- Better Representation: More balanced coverage across years and subjects
- Contextual Information: Lead-in text and remarks that were previously lost are now preserved
Data Source
The data originates from official Japanese Bar Examination questions administered by the Ministry of Justice of Japan. This version includes enhanced processing of the original examination materials with improved extraction techniques.
Version History
- v2.0: Enhanced extraction with lead-in context, remark annotations, and improved data quality
- v1.0: Initial release with basic question-answer pairs
Considerations for Use
Limitations
- The dataset is in Japanese and requires Japanese language processing capabilities
- Questions are based on the Japanese legal system and may not be applicable to other legal systems
- Some questions may require deep understanding of Japanese legal precedents and context
- The dataset reflects the legal understanding and interpretations at the time of each examination
Ethical Considerations
- This dataset is intended for research and educational purposes
- Users should be aware that legal interpretations may change over time
- The dataset should not be used as a substitute for professional legal advice
- Proper attribution should be given to the original source (Japanese Ministry of Justice)
Citation
If you use this dataset in your research, please cite:
@misc{japanese_bar_exam_qa_v2_2025,
title = {Japanese Bar Examination QA Dataset v2},
author = {Fumihito Nishino and Nguyen Ha Thanh and Ken Satoh},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/nguyenthanhasia/japanese-bar-exam-qa-v2}},
note = {Publisher: Hugging Face}
}
License
This dataset is released under the CC BY 4.0 license.
The underlying content originates from official government sources that are in the public domain. Please refer to the original source for their terms of use.
Contact
For questions or issues regarding this dataset, please open an issue in the dataset repository.
Related Datasets
- japanese-bar-exam-qa v1: Original version of this dataset