---
language:
- en
license: cc-by-nc-sa-4.0
size_categories:
- 10K<n<100K
pretty_name: "DAPFAM – Domain-Aware Patent Retrieval at the Family level"
tags:
- patents
- retrieval
- information-retrieval
- cross-domain
- patent
- fulltext
task_categories:
- text-retrieval
configs:
- config_name: corpus
data_files: corpus.parquet
- config_name: queries
data_files: queries.parquet
- config_name: relations
data_files: qrels_all.parquet
---

# DAPFAM dataset
**What's new (Sept 2025):** DAPFAM patent family retrieval tasks are now in MTEB. 18 tasks (ALL / IN / OUT × 2 query views × 3 target views) are available, including the 6 main tasks evaluated in our paper. You can benchmark any model with a single script and reproduce the paper's results by selecting the same encoder (Snowflake/snowflake-arctic-embed-m-v2.0). The paper used int8 quantization for hardware reasons; results may differ slightly, but not significantly, in float16/32.
*DAPFAM: A Domain-Aware Family-level Dataset to benchmark cross-domain patent retrieval*

- License: CC-BY-NC-SA-4.0
- Tasks: text-retrieval (patent family prior-art retrieval)
- Languages: English (eng-Latn)
- Evaluation date span: 1964-06-26 → 2023-06-20
- Cite: Ayaou et al., 2025, "DAPFAM: A Domain-Aware Family-level Dataset to benchmark cross-domain patent retrieval" (arXiv:2506.22141)
## Summary

DAPFAM provides 1,247 query patent families and 45,336 target families with citation-based relevance judgments and explicit domain labels (IN/OUT). A positive pair is labeled IN-domain if the query and target share at least one IPC3 code, and OUT-domain otherwise. Text is provided at the family level as full text (title, abstract, claims, description). The dataset supports both document-level and passage-level retrieval.
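The IN/OUT split therefore reduces to a 3-character IPC prefix overlap test. A minimal sketch of that rule (the function and its inputs are illustrative, not the dataset's actual schema):

```python
def is_in_domain(query_ipc: list[str], target_ipc: list[str]) -> bool:
    """IN-domain if the two families share at least one IPC3 code,
    i.e. the first three characters of an IPC symbol ('H01' from 'H01L 29/06')."""
    return bool({c[:3] for c in query_ipc} & {c[:3] for c in target_ipc})

print(is_in_domain(["H01L 29/06"], ["H01L 31/18"]))  # True: both map to IPC3 'H01'
print(is_in_domain(["H01L 29/06"], ["G06F 16/30"]))  # False: 'H01' vs 'G06' -> OUT-domain
```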
## What makes DAPFAM different?

- Explicit domain partitions (IN vs OUT) → enables true cross-domain evaluation.
- Family-level aggregation → reduces cross-jurisdiction redundancy.
- Compute-aware → small enough to support passage-level experiments on consumer-grade hardware.
## Benchmark DAPFAM on MTEB

18 retrieval tasks have been added (ALL / IN / OUT × 2 query field views × 3 target field views). Six of them were evaluated directly in the paper.
### Task naming scheme

- Query view: TA (Title+Abstract) or TAC (Title+Abstract+Claims)
- Target view: TA, TAC, or FullText (adds the description)
- Subsets: ALL, IN, OUT (IPC overlap filtering)
### Task list (18 total)

The six tasks marked "(in-paper)" are the ones evaluated in the paper; the full list can also be generated programmatically (see the sketch after the list).

**ALL**
- DAPFAMAllTitlAbsToTitlAbsRetrieval
- DAPFAMAllTitlAbsToTitlAbsClmRetrieval (in-paper)
- DAPFAMAllTitlAbsToFullTextRetrieval
- DAPFAMAllTitlAbsClmToTitlAbsRetrieval
- DAPFAMAllTitlAbsClmToTitlAbsClmRetrieval (in-paper)
- DAPFAMAllTitlAbsClmToFullTextRetrieval

**IN**
- DAPFAMInTitlAbsToTitlAbsRetrieval
- DAPFAMInTitlAbsToTitlAbsClmRetrieval (in-paper)
- DAPFAMInTitlAbsToFullTextRetrieval
- DAPFAMInTitlAbsClmToTitlAbsRetrieval
- DAPFAMInTitlAbsClmToTitlAbsClmRetrieval (in-paper)
- DAPFAMInTitlAbsClmToFullTextRetrieval

**OUT**
- DAPFAMOutTitlAbsToTitlAbsRetrieval
- DAPFAMOutTitlAbsToTitlAbsClmRetrieval (in-paper)
- DAPFAMOutTitlAbsToFullTextRetrieval
- DAPFAMOutTitlAbsClmToTitlAbsRetrieval
- DAPFAMOutTitlAbsClmToTitlAbsClmRetrieval (in-paper)
- DAPFAMOutTitlAbsClmToFullTextRetrieval
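Because the names compose mechanically from the scheme, you can generate the full list instead of typing it out (a sketch; it assumes the composed strings match the registered MTEB task names exactly):

```python
from itertools import product

subsets = ["All", "In", "Out"]            # ALL / IN / OUT qrel subsets
query_views = ["TitlAbs", "TitlAbsClm"]   # TA / TAC query views
target_views = ["TitlAbs", "TitlAbsClm", "FullText"]

task_names = [
    f"DAPFAM{s}{q}To{t}Retrieval"
    for s, q, t in product(subsets, query_views, target_views)
]
assert len(task_names) == 18  # 3 subsets x 2 query views x 3 target views
```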
### Quick start: run all tasks
```python
import mteb
from sentence_transformers import SentenceTransformer

model_name = "Snowflake/snowflake-arctic-embed-m-v2.0"
# Load the encoder in float16; requires a CUDA GPU (drop .cuda() for CPU).
model = SentenceTransformer(
    model_name,
    trust_remote_code=True,
    model_kwargs={"torch_dtype": "float16"},
).cuda().eval()

task_names = [
    # ALL
    "DAPFAMAllTitlAbsToTitlAbsRetrieval",
    "DAPFAMAllTitlAbsToTitlAbsClmRetrieval",
    "DAPFAMAllTitlAbsToFullTextRetrieval",
    "DAPFAMAllTitlAbsClmToTitlAbsRetrieval",
    "DAPFAMAllTitlAbsClmToTitlAbsClmRetrieval",
    "DAPFAMAllTitlAbsClmToFullTextRetrieval",
    # IN
    "DAPFAMInTitlAbsToTitlAbsRetrieval",
    "DAPFAMInTitlAbsToTitlAbsClmRetrieval",
    "DAPFAMInTitlAbsToFullTextRetrieval",
    "DAPFAMInTitlAbsClmToTitlAbsRetrieval",
    "DAPFAMInTitlAbsClmToTitlAbsClmRetrieval",
    "DAPFAMInTitlAbsClmToFullTextRetrieval",
    # OUT
    "DAPFAMOutTitlAbsToTitlAbsRetrieval",
    "DAPFAMOutTitlAbsToTitlAbsClmRetrieval",
    "DAPFAMOutTitlAbsToFullTextRetrieval",
    "DAPFAMOutTitlAbsClmToTitlAbsRetrieval",
    "DAPFAMOutTitlAbsClmToTitlAbsClmRetrieval",
    "DAPFAMOutTitlAbsClmToFullTextRetrieval",
]

tasks = mteb.get_tasks(tasks=task_names)
results = mteb.MTEB(tasks=tasks).run(
    model,
    output_folder=f"mteb_res/{model_name}",
    encode_kwargs={"batch_size": 16, "prompt_name": None},
)
print(results)
```
To reproduce the paper's reported MTEB-compatible results, restrict the run to the six in-paper tasks listed above. The paper ran the encoder with int8 quantization for hardware reasons, so float16 runs on GPU may differ slightly.
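Conveniently, the six in-paper tasks are exactly those whose target view is TAC, so they can be selected by suffix (a small sketch reusing `task_names` and `model` from the quick start above):

```python
# All six in-paper tasks target the Title+Abstract+Claims (TAC) view.
in_paper = [t for t in task_names if t.endswith("ToTitlAbsClmRetrieval")]
assert len(in_paper) == 6

results = mteb.MTEB(tasks=mteb.get_tasks(tasks=in_paper)).run(
    model, output_folder=f"mteb_res/{model_name}"
)
```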
## How to Load the Dataset
```python
from datasets import load_dataset

dc = load_dataset("datalyes/DAPFAM_patent", "corpus")     # 45,336 target families
dq = load_dataset("datalyes/DAPFAM_patent", "queries")    # 1,247 query families
dr = load_dataset("datalyes/DAPFAM_patent", "relations")  # qrels: all/in/out
```
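To sanity-check what was loaded, you can print each config's size and columns without assuming any particular schema:

```python
for name, ds in [("corpus", dc), ("queries", dq), ("relations", dr)]:
    split = next(iter(ds.values()))   # single-parquet configs expose one split
    print(name, len(split), split.column_names)
```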
## Counts

- Queries: 1,247
- Targets: 45,336
- Qrels (all): ≈49,869 (positives + sampled negatives)
- Positive qrels: IN ~19,736, OUT ~5,193 (≈24,929 positives in total, so roughly half of the qrels are sampled negatives)
## Evaluation choices

- Metrics: NDCG@100 (primary), Recall@100 (secondary).
- Document-level views in MTEB; the paper also explores passage-level retrieval and reciprocal rank fusion (RRF) separately (a generic RRF sketch follows below).
- Encoder: Snowflake/snowflake-arctic-embed-m-v2.0; in-paper runs were quantized to int8 for efficiency.
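RRF is model-agnostic: each ranked list awards a document 1/(k + rank), with k = 60 by convention, and the fused ranking sorts documents by their summed score. A generic sketch, not the paper's exact fusion setup:

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum_i 1 / (k + rank_i(d)), 1-based ranks."""
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a document-level run with a passage-level run.
print(rrf_fuse([["d1", "d2", "d3"], ["d2", "d3", "d1"]]))  # ['d2', 'd1', 'd3']
```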
## Citation

```bibtex
@misc{ayaou2025dapfamdomainawarefamilyleveldataset,
  title={DAPFAM: A Domain-Aware Family-level Dataset to benchmark cross domain patent retrieval},
  author={Iliass Ayaou and Denis Cavallucci and Hicham Chibane},
  year={2025},
  eprint={2506.22141},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.22141},
}
```