---
dataset_info:
- config_name: full
features:
- name: doc_key
dtype: string
- name: gutenberg_key
dtype: string
- name: sentences
sequence:
sequence: string
- name: clusters
sequence:
sequence:
sequence: int64
- name: characters
list:
- name: name
dtype: string
- name: mentions
sequence:
sequence: int64
splits:
- name: train
num_bytes: 118643409
num_examples: 45
- name: validation
num_bytes: 5893208
num_examples: 5
- name: test
num_bytes: 2732407
num_examples: 3
download_size: 317560335
dataset_size: 127269024
- config_name: splitted
features:
- name: doc_key
dtype: string
- name: gutenberg_key
dtype: string
- name: sentences
sequence:
sequence: string
- name: clusters
sequence:
sequence:
sequence: int64
- name: characters
list:
- name: name
dtype: string
- name: mentions
sequence:
sequence: int64
splits:
- name: train
num_bytes: 118849212
num_examples: 7544
- name: validation
num_bytes: 5905814
num_examples: 398
- name: test
num_bytes: 2758250
num_examples: 152
download_size: 317560335
dataset_size: 127513276
language:
- en
pretty_name: BOOKCOREF
size_categories:
- 10M<n<100M
tags:
- coreference-resolution
license: cc-by-sa-4.0
---

We release both the manually annotated test split (BookCoref<sub>gold</sub>) and the pipeline-generated train and validation splits (BookCoref<sub>silver</sub>).

To enable replication of our results, we also release a pre-chunked (`splitted`) version of each split, available by adding the suffix `_splitted` to each split name. As specified in the paper, this version is obtained by chunking each book into contiguous windows of 1,500 tokens and retaining the coreference clusters of each window, as sketched below.
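For illustration, here is a minimal sketch of this windowing step (not the authors' exact preprocessing code), assuming windows are built from whole sentences and that mention offsets are document-level, inclusive `[start, end]` token spans:

```python
def split_into_windows(sentences, clusters, window_size=1500):
    """Chunk a word-tokenized book into contiguous ~window_size-token windows
    of whole sentences, keeping each cluster's mentions that fall inside a
    window and remapping their offsets to window-local positions."""
    windows = []
    cur_sents, cur_len, offset = [], 0, 0  # offset = doc-level start of window
    for sent in sentences + [None]:  # trailing None flushes the last window
        if cur_sents and (sent is None or cur_len + len(sent) > window_size):
            local = [
                [[s - offset, e - offset] for s, e in cluster
                 if s >= offset and e < offset + cur_len]
                for cluster in clusters
            ]
            windows.append({
                "sentences": cur_sents,
                "clusters": [c for c in local if c],  # drop empty clusters
            })
            offset += cur_len
            cur_sents, cur_len = [], 0
        if sent is not None:
            cur_sents.append(sent)
            cur_len += len(sent)
    return windows
```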
## 📚 Quickstart
Simply load the dataset through Hugging Face's `datasets` library:

```python
from datasets import load_dataset

bookcoref = load_dataset("sapienzanlp/bookcoref")
```
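The pre-chunked configuration defined in the metadata above can be loaded by passing its config name, following standard `datasets` usage (the configuration is named `splitted` in this repository):

```python
from datasets import load_dataset

# Load the pre-chunked 1,500-token windows instead of whole books
bookcoref_splitted = load_dataset("sapienzanlp/bookcoref", "splitted")
print(bookcoref_splitted)  # DatasetDict with train/validation/test splits
```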
## ℹ️ Data format
BookCoref is a collection of annotated books. Each item contains the annotations of one book, following the structure of OntoNotes:

```python
{
    "doc_key": "pride_and_prejudice_142",  # (str) id of the document
    "gutenberg_key": "...",  # (str) Project Gutenberg id of the source book
    "sentences": [["Pride", "and", "Prejudice", "."], ["Begin", ...], ...],  # (list[list[str]]) word-tokenized sentences
    "clusters": [[[0, 0], [3, 5]], [[4, 9], ...], ...],  # (list[list[list[int]]]) mention offsets of each coreference cluster
    "characters": [
        {
            "name": "Mr. Bennet",
            "mentions": [[0, 0], ...],
        },
        {
            "name": "Mr. Darcy",
            "mentions": [[5, 7], ...],
        },
    ],  # (list[dict]) character objects with a name and that character's mention offsets, i.e., dict(name: str, mentions: list[list[int]])
}
```
We also include information on character names, which is not exploited in traditional coreference settings but may prove useful in future work.
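Since the annotations follow the OntoNotes structure, mention offsets can be read as `[start, end]` spans over the flattened token sequence of the book; a small sketch of how to recover mention strings (the inclusive-span reading is our assumption):

```python
doc = bookcoref["train"][0]
tokens = [tok for sent in doc["sentences"] for tok in sent]  # flatten sentences

# Surface form of the first mention of the first cluster
start, end = doc["clusters"][0][0]
print(" ".join(tokens[start : end + 1]))  # spans assumed inclusive

# First mention of each annotated character
for character in doc["characters"]:
    s, e = character["mentions"][0]
    print(character["name"], "->", " ".join(tokens[s : e + 1]))
```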
## 📊 Dataset statistics
BookCoref has distinctly book-scale characteristics. From the split sizes above, the `full` configuration contains 53 books (45 train, 5 validation, 3 test), which the `splitted` configuration divides into 8,094 windows of 1,500 tokens, i.e., roughly 230K tokens per book on average.
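These figures can also be recomputed directly from the loaded data, e.g.:

```python
# Basic per-split statistics of the "full" configuration
for split, ds in bookcoref.items():
    n_tokens = sum(len(sent) for doc in ds for sent in doc["sentences"])
    n_mentions = sum(len(cluster) for doc in ds for cluster in doc["clusters"])
    print(f"{split}: {len(ds)} books, {n_tokens:,} tokens, {n_mentions:,} mentions")
```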
## 🖋️ Cite this work
This work has been published at ACL 2025 (main conference). If you use any artifact of this dataset, please consider citing our paper as follows:

```bibtex
@inproceedings{martinelli-etal-2025-bookcoref,
    title = "{BookCoref}: Coreference Resolution at Book Scale",
    author = "Martinelli, Giuliano and
      Bonomo, Tommaso and
      Huguet Cabot, Pere-Llu{\'\i}s and
      Navigli, Roberto",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
}
```
## ©️ License information
All annotations provided in this repository are licensed under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license. The tokenized text of the books is a modification of books from Project Gutenberg, distributed in accordance with the Project Gutenberg License.