---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- legal
- legal-reasoning
- multiple-choice
configs:
- config_name: canada_tax_court_outcomes
  data_files:
  - split: train
    path: canada_tax_court_outcomes/train-*
  - split: test
    path: canada_tax_court_outcomes/test-*
- config_name: citation_prediction_classification
  data_files:
  - split: train
    path: citation_prediction_classification/train-*
  - split: test
    path: citation_prediction_classification/test-*
- config_name: diversity_3
  data_files:
  - split: train
    path: diversity_3/train-*
  - split: test
    path: diversity_3/test-*
- config_name: diversity_5
  data_files:
  - split: train
    path: diversity_5/train-*
  - split: test
    path: diversity_5/test-*
- config_name: diversity_6
  data_files:
  - split: train
    path: diversity_6/train-*
  - split: test
    path: diversity_6/test-*
- config_name: jcrew_blocker
  data_files:
  - split: train
    path: jcrew_blocker/train-*
  - split: test
    path: jcrew_blocker/test-*
dataset_info:
- config_name: canada_tax_court_outcomes
  features:
  - name: answer
    dtype: string
  - name: index
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 7864
    num_examples: 6
  - name: test
    num_bytes: 392042
    num_examples: 244
  download_size: 161532
  dataset_size: 399906
- config_name: citation_prediction_classification
  features:
  - name: answer
    dtype: string
  - name: citation
    dtype: string
  - name: index
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 1471
    num_examples: 2
  - name: test
    num_bytes: 60272
    num_examples: 108
  download_size: 30302
  dataset_size: 61743
- config_name: diversity_3
  features:
  - name: aic_is_met
    dtype: string
  - name: answer
    dtype: string
  - name: index
    dtype: string
  - name: parties_are_diverse
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 3040
    num_examples: 6
  - name: test
    num_bytes: 153782
    num_examples: 300
  download_size: 38926
  dataset_size: 156822
- config_name: diversity_5
  features:
  - name: aic_is_met
    dtype: string
  - name: answer
    dtype: string
  - name: index
    dtype: string
  - name: parties_are_diverse
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 3520
    num_examples: 6
  - name: test
    num_bytes: 177382
    num_examples: 300
  download_size: 45990
  dataset_size: 180902
- config_name: diversity_6
  features:
  - name: aic_is_met
    dtype: string
  - name: answer
    dtype: string
  - name: index
    dtype: string
  - name: parties_are_diverse
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 5087
    num_examples: 6
  - name: test
    num_bytes: 253115
    num_examples: 300
  download_size: 66869
  dataset_size: 258202
- config_name: jcrew_blocker
  features:
  - name: answer
    dtype: string
  - name: index
    dtype: string
  - name: text
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 16657
    num_examples: 6
  - name: test
    num_bytes: 137273
    num_examples: 54
  download_size: 79424
  dataset_size: 153930
---
# DatologyAI/legalbench

## Overview
This repository contains 26 legal reasoning tasks from LegalBench, processed for easy use in language model evaluation. Each task includes its original data plus a formatted `input` column that can be fed directly to a model for evaluation.
## Task Categories
The tasks are organized into several categories:
### Basic Legal Datasets
- canada_tax_court_outcomes
- jcrew_blocker
- learned_hands_benefits
- telemarketing_sales_rule
### Citation Datasets
- citation_prediction_classification
### Diversity Analysis Datasets
- diversity_3
- diversity_5
- diversity_6
### Jurisdiction Datasets
- personal_jurisdiction
### SARA Analysis Datasets
- sara_entailment
- sara_numeric
### Supply Chain Disclosure Datasets
- supply_chain_disclosure_best_practice_accountability
- supply_chain_disclosure_best_practice_certification
- supply_chain_disclosure_best_practice_training
### MAUD Contract Analysis Datasets
- maud_ability_to_consummate_concept_is_subject_to_mae_carveouts
- maud_additional_matching_rights_period_for_modifications_cor
- maud_change_in_law_subject_to_disproportionate_impact_modifier
- maud_changes_in_gaap_or_other_accounting_principles_subject_to_disproportionate_impact_modifier
- maud_cor_permitted_in_response_to_intervening_event
- maud_fls_mae_standard
- maud_includes_consistent_with_past_practice
- maud_initial_matching_rights_period_cor
- maud_ordinary_course_efforts_standard
- maud_pandemic_or_other_public_health_event_subject_to_disproportionate_impact_modifier
- maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic_related_governmental_responses_or_measures
- maud_type_of_consideration
## Task Details
Task | Type | Description |
---|---|---|
canada_tax_court_outcomes | multiple_choice | INSTRUCTIONS: Indicate whether the following judgment excerpt from a Tax Court of Canada decision... |
citation_prediction_classification | multiple_choice | Can the case be used as a citation for the provided text? |
diversity_3 | multiple_choice | Diversity jurisdiction exists when there is (1) complete diversity between plaintiffs and defenda... |
diversity_5 | multiple_choice | Diversity jurisdiction exists when there is (1) complete diversity between plaintiffs and defenda... |
diversity_6 | multiple_choice | Diversity jurisdiction exists when there is (1) complete diversity between plaintiffs and defenda... |
jcrew_blocker | multiple_choice | The JCrew Blocker is a provision that typically includes (1) a prohibition on the borrower from t... |
learned_hands_benefits | multiple_choice | Does the post discuss public benefits and social services that people can get from the government... |
maud_ability_to_consummate_concept_is_subject_to_mae_carveouts | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_additional_matching_rights_period_for_modifications_cor | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_change_in_law_subject_to_disproportionate_impact_modifier | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_changes_in_gaap_or_other_accounting_principles_subject_to_disproportionate_impact_modifier | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_cor_permitted_in_response_to_intervening_event | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_fls_mae_standard | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_includes_consistent_with_past_practice | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_initial_matching_rights_period_cor | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_ordinary_course_efforts_standard | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_pandemic_or_other_public_health_event_subject_to_disproportionate_impact_modifier | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic_related_governmental_responses_or_measures | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
maud_type_of_consideration | multiple_choice | Instruction: Read the segment of a merger agreement and answer the multiple-choice question by ch... |
personal_jurisdiction | multiple_choice | There is personal jurisdiction over a defendant in the state where the defendant is domiciled, or... |
sara_entailment | multiple_choice | Determine whether the following statements are entailed under the statute. |
sara_numeric | regression | Answer the following questions. |
supply_chain_disclosure_best_practice_accountability | multiple_choice | Task involving supply chain disclosures |
supply_chain_disclosure_best_practice_certification | multiple_choice | Task involving supply chain disclosures |
supply_chain_disclosure_best_practice_training | multiple_choice | Task involving supply chain disclosures |
telemarketing_sales_rule | multiple_choice | The Telemarketing Sales Rule is provided by 16 C.F.R. § 310.3(a)(1) and 16 C.F.R. § 310.3(a)(2). |
## Data Format

Each dataset preserves its original columns and adds an `input` column containing the formatted prompt, ready to be used with language models. The column structure varies by task category (a quick way to verify the columns of a loaded config is sketched after the list below):
- Basic Legal Datasets: answer, index, text, input
- Citation Datasets: answer, citation, index, text, input
- Diversity Analysis Datasets: aic_is_met, answer, index, parties_are_diverse, text, input
- Jurisdiction Datasets: answer, index, slice, text, input
- SARA Analysis Datasets: answer, case id, description, index, question, statute, text, input
- Supply Chain Disclosure Datasets: answer, index, text, input
- MAUD Contract Analysis Datasets: answer, index, text, input
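A minimal sketch of that verification, using `diversity_3` as the example config (any config name from the table above can be substituted):

```python
from datasets import load_dataset

# Load one config and list its columns; the test split should expose the
# original task columns plus the added `input` column.
dataset = load_dataset("DatologyAI/legalbench", "diversity_3")
print(dataset["test"].column_names)
# Expected, per the list above: aic_is_met, answer, index, parties_are_diverse, text, input
```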
## Usage
```python
from datasets import load_dataset

# Load a specific task
task = load_dataset("DatologyAI/legalbench", "canada_tax_court_outcomes")

# Access the formatted input
example = task["test"][0]
print(example["input"])

# Access the correct answer
print(example["answer"])
```
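To iterate over the full benchmark rather than a single task, the available configs can be enumerated programmatically. A minimal sketch using the config-listing helper from the `datasets` library:

```python
from datasets import get_dataset_config_names, load_dataset

# List every task (config) in this repository.
config_names = get_dataset_config_names("DatologyAI/legalbench")

# Load each task and report the size of its test split.
for name in config_names:
    task = load_dataset("DatologyAI/legalbench", name)
    print(f"{name}: {len(task['test'])} test examples")
```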
## Model Evaluation Example
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a LegalBench task
task_name = "personal_jurisdiction"
dataset = load_dataset("DatologyAI/legalbench", task_name)

# Process an example
example = dataset["test"][0]
input_text = example["input"]

# Generate a response (greedy decoding)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=10,
    do_sample=False,
)

# Decode only the newly generated tokens
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Compare against the gold answer
print(f"Gold answer: {example['answer']}")
print(f"Model response: {response}")
```
## Citation
If you use this dataset, please cite both this repository and the original LegalBench paper:
```bibtex
@misc{legalbench_datology,
  author    = {DatologyAI},
  title     = {Processed LegalBench Dataset},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/DatologyAI/legalbench}
}

@article{guha2023legalbench,
  title   = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
  author  = {Guha, Neel and Nyarko, Julian and Ho, Daniel E. and R{\'e}, Christopher and others},
  journal = {arXiv preprint arXiv:2308.11462},
  year    = {2023}
}
```
## License
These datasets are derived from LegalBench and follow the same licensing as the original repository.