
Books Named Entity Recognition (NER) Dataset
A small Named‑Entity‑Recognition (NER) corpus built from categories contained in Project Gutenberg’s public catalogues. It is intended for training or benchmarking entity extractors such as GLiNER on bibliographic metadata.
1 Provenance
The data originate from Project Gutenberg's public catalogue.
2 Quick facts
| | |
|---|---|
| Records (total) | 2364 |
| Train split | 2127 queries |
| Eval split | 237 queries |
| NER labels | `category` |
| Source | gutenberg.org |
| Language coverage | English only |
| License | CC0 1.0 Public‑Domain Dedication |
3 Dataset description
3.1 Origin of the data
- We parsed Project Gutenberg's public catalogues and turned each category into several synthetic “user queries” (patterns such as “Find books related to the category **[category]**” or “I'm interested in reading more about the book type **[category]**”); a sketch of this template-based generation is shown below.
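As an illustration, here is a minimal Python sketch of this kind of template-based generation. The template strings, the `make_query` helper, and the dict-style span format are assumptions for illustration only, not the exact code used to build the corpus:

```python
import random

# Illustrative templates only; the exact patterns used to build the corpus may differ.
TEMPLATES = [
    "find books related to the category {category} .",
    "i'm interested in reading more about the book type {category} .",
]

def make_query(category: str) -> dict:
    """Render one synthetic query and mark the category span with inclusive token indices."""
    template = random.choice(TEMPLATES)
    prefix, _, suffix = template.partition("{category}")
    cat_tokens = category.split()
    start = len(prefix.split())            # index of the first category token
    end = start + len(cat_tokens) - 1      # inclusive end index
    tokens = prefix.split() + cat_tokens + suffix.split()
    return {
        "tokenized_text": tokens,
        "ner": [{"start": start, "end": end, "label": "category"}],
    }

print(make_query("archaeology"))
```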
3.2 Fields
| Field | Type | Description |
|---|---|---|
| `tokenized_text` | `List[str]` | The query tokens. |
| `ner` | `List[[int, int, str]]` | Spans as token indices (inclusive) and the label `category`. |
Example:

```json
{"tokenized_text": ["find", "books", "related", "to", "the", "category", "archaeology", "."],
 "ner": [{"start": 6, "end": 6, "label": "category"}]}
```
3.3 Splits
The corpus is provided as two separate JSONL files; a sketch for reading them directly follows the list below:
- entities_train.jsonl — 2127 synthetic queries
- eval.jsonl — 237 synthetic queries
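If you prefer to work with the raw files, a minimal sketch using only the standard library, assuming the file has already been downloaded locally under the name listed above:

```python
import json

# Assumes entities_train.jsonl has been downloaded into the working directory.
with open("entities_train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records))                  # expected: 2127
print(records[0]["tokenized_text"])  # the query tokens of the first record
print(records[0]["ner"])             # its annotated category span(s)
```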
4 How to load
```python
from datasets import load_dataset

# Load the two splits from the Hugging Face Hub.
ds_train = load_dataset("empathyai/books-ner-dataset-categories", split="train")
ds_eval = load_dataset("empathyai/books-ner-dataset-categories", split="eval")
```
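The loaded splits are ordinary `datasets.Dataset` objects. A quick sanity check, continuing from the snippet above (record counts are the expected ones; the exact column types depend on the dataset builder):

```python
# Continuing from the loading snippet above.
print(len(ds_train), len(ds_eval))   # expected: 2127 and 237
print(ds_train[0])                   # one record: tokenized_text plus its ner annotations
print(ds_train.features)             # column schema as exposed by the datasets builder
```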
5 Licensing and legal basis (EU)
5.1 Why the data are in the public domain
- Factual nature — Catalogue categories, book titles, and author names are mere facts. Under EU law (CJEU Infopaq, C‑5/08, and Art. 2 of the InfoSoc Directive 2001/29/EC), facts and very short expressions lacking originality are not protected by copyright.
- No sui generis database right — The maker of Project Gutenberg’s catalogue is a U.S. entity; the EU database right (Directive 96/9/EC) only protects databases whose maker is established in the EU.
5.2 License chosen
To reflect this public‑domain status worldwide, we apply the Creative Commons CC0 1.0 Public‑Domain Dedication. You may copy, modify, distribute, and use the dataset for any purpose without asking permission.
5.3 Trademarks and attribution
- “Project Gutenberg” is a registered trademark of the Project Gutenberg Literary Archive Foundation. This dataset is not endorsed by or affiliated with Project Gutenberg.
- If you build on this dataset, please leave this disclaimer intact and do not use “Project Gutenberg” in a way that suggests endorsement.
6 Acknowledgements
Thanks to the Project Gutenberg volunteers for maintaining the free catalogue, and to Hugging Face for hosting the dataset.