<img src="assets/bookcoref.png" width="700">
</div>
<!-- Add author names, ACL 2025, link -->

This data repository contains the <span style="font-variant: small-caps;">BookCoref</span> dataset, introduced in the paper "<span style="font-variant: small-caps;">BookCoref</span>: Coreference Resolution at Book Scale" by <a href="https://arxiv.org/">Martinelli et al. (2025)</a>, presented at the <a href="https://2025.aclweb.org/">ACL 2025</a> conference.

We release both the manually-annotated `test` split (<span style="font-variant: small-caps;">BookCoref</span><sub>gold</sub>) and the pipeline-generated `train` and `validation` splits (<span style="font-variant: small-caps;">BookCoref</span><sub>silver</sub>).
To enable the replication of our results, we also release a pre-chunked version of each split as the separate `splitted` configuration.
<!-- As specified in the paper, this version is obtained by chunking the text into contiguous windows of 1500 tokens, retaining the coreference clusters of each window. -->

## ⚠️ Project Gutenberg license disclaimer

<span style="font-variant: small-caps;">BookCoref</span> is based on books from Project Gutenberg, which are publicly available under the [Project Gutenberg License](https://www.gutenberg.org/policy/license.html).
This license holds for users located in the United States, where the books are in the public domain.

We do not distribute the original text of the books; rather, our dataset consists of a script that downloads and preprocesses the books from an archived version of Project Gutenberg through the [Wayback Machine](https://web.archive.org/).
Users are responsible for checking the copyright status of each book in their country.

## 📚 Quickstart

To use the <span style="font-variant: small-caps;">BookCoref</span> dataset, you need to install the following Python packages in your environment:

```bash
pip install "datasets<=3.6.0" deepdiff spacy nltk
```

You can then load each configuration through Hugging Face's `datasets` library:

```python
from datasets import load_dataset

bookcoref = load_dataset("sapienzanlp/bookcoref")
bookcoref_splitted = load_dataset("sapienzanlp/bookcoref", name="splitted")
```

These commands will download and preprocess the books, add the coreference annotations, and return a `DatasetDict` according to the requested configuration.

```python
>>> bookcoref
DatasetDict({
    train: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 45
    })
    validation: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 5
    })
    test: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 3
    })
})
>>> bookcoref_splitted
DatasetDict({
    train: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 7544
    })
    validation: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 398
    })
    test: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 152
    })
})
```
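
As a quick sanity check, you can inspect one annotated book directly. The snippet below is a minimal sketch, assuming the field names shown in the Data format section and OntoNotes-style inclusive `[start, end]` token offsets (an assumption, not something this card states explicitly):

```python
# Minimal sketch: peek at one annotated book.
# Assumption: mention offsets are inclusive [start, end] indices into the
# book's flattened token sequence.
book = bookcoref["test"][0]
tokens = [token for sentence in book["sentences"] for token in sentence]

print(book["doc_key"], book["gutenberg_key"])
first_cluster = book["clusters"][0]
print([" ".join(tokens[start : end + 1]) for start, end in first_cluster[:5]])
```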

## ℹ️ Data format

<span style="font-variant: small-caps;">BookCoref</span> is a collection of annotated books.
Each item contains the annotations of one book following the structure of OntoNotes:

```python
{
    doc_key: "pride_and_prejudice_1342",  # (str) ID of the document
    gutenberg_key: "1342",  # (str) key of the book in Project Gutenberg
    sentences: [["CHAPTER", "I."], ["It", "is", "a", "truth", "universally", "acknowledged", ...], ...],  # list[list[str]], word-tokenized sentences
    clusters: [[[79, 80], [81, 82], ...], [[2727, 2728], ...], ...],  # list[list[list[int]]], mention offsets of each cluster
    characters: [
        {
            name: "Mr Bennet",
            cluster: [[79, 80], ...],
        },
        {
            name: "Mr. Darcy",
            cluster: [[2727, 2728], [2729, 2730], ...],
        },
    ]  # list[character], character objects consisting of a name and mention offsets, i.e., dict[name: str, cluster: list[list[int]]]
}
```
<!-- Add description of fields in example, maybe OntoNotes format is not enough -->
We also include character names, which are not exploited in traditional coreference settings but could inspire future directions in Coreference Resolution.
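
For example, a small helper can map each character to the surface forms of its mentions. This is a hedged sketch under the same offset assumption as above, where `book` is one record loaded as in the Quickstart:

```python
# Hedged sketch: map each character name to its mention strings, assuming
# inclusive [start, end] offsets into the flattened token sequence.
def character_mentions(book: dict) -> dict[str, list[str]]:
    tokens = [token for sentence in book["sentences"] for token in sentence]
    return {
        character["name"]: [" ".join(tokens[start : end + 1]) for start, end in character["cluster"]]
        for character in book["characters"]
    }
```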

## 📊 Dataset statistics

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64f85270ceabf1e6fc524bb8/DgYU_2yKlZuwDTV-duGWh.png" width=1000/>
</div>

## 🖋️ Cite this work

This work has been published at ACL 2025 (main conference). If you use any artifact of this dataset, please consider citing our paper as follows:

## ©️ License information

All the annotations provided by this repository are licensed under the [Creative Commons Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
<!-- The tokenized text of books is a modification of books from Project Gutenberg, following [their license](https://www.gutenberg.org/policy/license.html). -->