Upload README.md with huggingface_hub
README.md
CHANGED
@@ -32,7 +32,7 @@ dataset_info:
     num_examples: 3
   download_size: 317560335
   dataset_size: 127269024
-- config_name:
+- config_name: split
   features:
   - name: doc_key
     dtype: string

@@ -79,10 +79,10 @@ license: cc-by-sa-4.0
 <img src="assets/bookcoref.png" width="700">
 </div>
 <!-- Add the authors' names, ACL 2025, link -->
-This data repository contains the <span style="font-variant: small-caps;">BookCoref</span> dataset, introduced in the paper "<span style="font-variant: small-caps;">BookCoref</span>: Coreference Resolution at Book Scale" by <a href="https://arxiv.org/
+This data repository contains the <span style="font-variant: small-caps;">BookCoref</span> dataset, introduced in the paper "<span style="font-variant: small-caps;">BookCoref</span>: Coreference Resolution at Book Scale" by <a href="https://arxiv.org/">Martinelli et al. (2025)</a>, presented at the <a href="https://2025.aclweb.org/">ACL 2025</a> conference.
 
 We release both the manually-annotated `test` split (<span style="font-variant: small-caps;">BookCoref</span><sub>gold</sub>) and the pipeline-generated `train` and `validation` splits (<span style="font-variant: small-caps;">BookCoref</span><sub>silver</sub>).
-In order to enable the replication of our results, we also release the
+In order to enable the replication of our results, we also release a windowed version of each split as a separate `split` configuration.
 <!-- As specified in the paper, this version is obtained through chunking the text into contiguous windows of 1500 tokens, retaining the coreference clusters of each window. -->
 
 ## ⚠️ Project Gutenberg license disclaimer

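The commented-out line in the hunk above describes how this windowed version is produced: each book is chunked into contiguous windows of 1500 tokens, and only the coreference clusters of each window are retained. As a rough illustration of that idea (not the authors' released pipeline), assuming each document exposes `sentences` as a list of token lists and `clusters` as lists of `[start, end]` token offsets over the whole book, the chunking could be sketched as:

```python
# Illustrative sketch only, not the released preprocessing pipeline.
# Assumes doc["sentences"] is a list of tokenized sentences and doc["clusters"]
# is a list of clusters, each a list of [start, end] token offsets over the
# whole book (see the "Data format" section for the actual schema).

def split_into_windows(doc, max_tokens=1500):
    # Group whole sentences into contiguous windows of at most max_tokens tokens.
    windows, current, length = [], [], 0
    for sentence in doc["sentences"]:
        if current and length + len(sentence) > max_tokens:
            windows.append(current)
            current, length = [], 0
        current.append(sentence)
        length += len(sentence)
    if current:
        windows.append(current)

    # For each window, keep only the mentions that fall entirely inside it,
    # re-indexing their offsets relative to the window start.
    chunks, offset = [], 0
    for window in windows:
        window_len = sum(len(s) for s in window)
        kept_clusters = []
        for cluster in doc["clusters"]:
            kept = [
                [start - offset, end - offset]
                for start, end in cluster
                if start >= offset and end < offset + window_len
            ]
            if kept:
                kept_clusters.append(kept)
        chunks.append({"sentences": window, "clusters": kept_clusters})
        offset += window_len
    return chunks
```

Mentions that cross a window boundary are simply dropped in this sketch; the actual preprocessing may handle them differently.
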
@@ -98,7 +98,7 @@ Users are responsible for checking the copyright status of each book in their co
 To use the <span style="font-variant: small-caps;">BookCoref</span> dataset, you need to install the following Python packages in your environment:
 
 ```bash
-pip install "datasets<=3.6.0"
+pip install "datasets<=3.6.0" deepdiff spacy nltk
 ```
 
 You can then load each configuration through Huggingface's `datasets` library:

@@ -107,7 +107,7 @@ You can then load each configuration through Huggingface's `datasets` library:
 from datasets import load_dataset
 
 bookcoref = load_dataset("sapienzanlp/bookcoref")
-
+bookcoref_split = load_dataset("sapienzanlp/bookcoref", name="split")
 ```
 
 These commands will download and preprocess the books, add the coreference annotations, and return a `DatasetDict` according to the requested configuration.

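Since preparing book-length documents can take a while, it may be worth caching the resulting `DatasetDict` locally. The snippet below is a small usage sketch on top of the standard `datasets` API; the directory name is only an example:

```python
from datasets import load_dataset, load_from_disk

# Build the dataset once (this downloads the books and attaches the
# coreference annotations), then cache the prepared DatasetDict on disk.
bookcoref = load_dataset("sapienzanlp/bookcoref")
bookcoref.save_to_disk("bookcoref_prepared")  # example path

# In a later session, reload without re-running the preprocessing.
bookcoref = load_from_disk("bookcoref_prepared")
```
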
@@ -127,7 +127,7 @@ DatasetDict({
         num_rows: 3
     })
 })
->>>
+>>> bookcoref_split
 DatasetDict({
     train: Dataset({
         features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],

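Based on the feature names listed in the hunk above, each row is one annotated book. A quick sanity check of a loaded document might look like this (the inner structure of each field is an assumption here; the "Data format" section below is authoritative):

```python
# Field names come from the features listed above; their inner structure is
# described in the "Data format" section of the card.
doc = bookcoref_split["train"][0]

print(doc["doc_key"], doc["gutenberg_key"])
print("sentences:", len(doc["sentences"]))
print("clusters:", len(doc["clusters"]))
print("characters:", len(doc["characters"]))
```
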
@@ -144,8 +144,6 @@ DatasetDict({
 })
 ```
 
-### Local Download
-To locally download the dataset as a jsonlines file, follow the procedure on our [official GitHub repo](http://github.com/sapienzanlp/bookcoref).
 
 ## ℹ️ Data format
 
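This last hunk also drops the "Local Download" section, which pointed to the [official GitHub repo](http://github.com/sapienzanlp/bookcoref) for obtaining the dataset as jsonlines files. If all that is needed is a local jsonlines dump of what `load_dataset` returns (not necessarily identical to the files produced by the GitHub procedure), the standard `Dataset.to_json` export is one option; the file names below are just examples:

```python
# Export every split of the loaded DatasetDict to a JSON Lines file
# (one JSON object per line). File names are illustrative.
for split_name, split in bookcoref.items():
    split.to_json(f"bookcoref_{split_name}.jsonl")
```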