
Books Named Entity Recognition (NER) Dataset

A lightweight Named‑Entity‑Recognition (NER) corpus built from titles and author names contained in Project Gutenberg’s public catalogues. It is intended for training or benchmarking entity extractors such as GLiNER on bibliographic metadata.


1  Provenance

The dataset is derived from Project Gutenberg's public catalogue files (GUTINDEX); see section 3.1 for details.


2  Quick facts

  • Records (total): 434 925
  • Train split: 391 432 queries
  • Eval split: 43 493 queries
  • Tokens / record: 6 – 30 (median ≈ 14)
  • NER labels: title, author
  • Source: GUTINDEX.ALL and yearly GUTINDEX.20xx files, downloaded 18 April 2025 from rsync://ftp.gutenberg.org
  • Language coverage: English only
  • License: CC0 1.0 Public‑Domain Dedication

3  Dataset description

3.1  Origin of the data

  • Project Gutenberg publishes yearly plain‑text catalogues (GUTINDEX) listing eBook ID – Title – Author. These catalogues are public and do not contain the full texts.
  • We parsed those files and turned each one into several synthetic “user queries” (patterns such as “Looking for [Title] from [Author].” or “Any recommendations by [Author]?”).
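The template expansion described above can be sketched as follows. This is an illustrative reconstruction, not the actual generation code; the function name, template, and span arithmetic are assumptions based on the record format shown in section 3.2.

```python
# Hypothetical sketch: expand one (title, author) catalogue entry into a
# synthetic query with token-level NER spans (end indices inclusive).
def build_query(title: str, author: str) -> dict:
    prefix = ["Looking", "for"]   # template: "Looking for [Title] from [Author]."
    middle = ["from"]
    title_tokens = title.split()
    author_tokens = author.split()
    tokens = prefix + title_tokens + middle + author_tokens + ["."]

    # Compute inclusive token-index spans for each entity.
    t_start = len(prefix)
    t_end = t_start + len(title_tokens) - 1
    a_start = t_end + 1 + len(middle)
    a_end = a_start + len(author_tokens) - 1

    return {
        "tokenized_text": tokens,
        "ner": [[t_start, t_end, "title"], [a_start, a_end, "author"]],
    }
```

A catalogue line for "Moby Dick" by Herman Melville would yield the tokens `["Looking", "for", "Moby", "Dick", "from", "Herman", "Melville", "."]` with spans `[2, 3, "title"]` and `[5, 6, "author"]`.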

3.2  Fields

Field            Type                   Description
tokenized_text   List[str]              The query tokens.
ner              List[[int, int, str]]  Spans as inclusive token indices plus the label (title / author).

Example:

{
  "tokenized_text": [
    "Looking", "for", "La", "conqueste", "du", "chasteau", "d", "'", "amours", "conquestee", "par", "l", "'", "umilité", "du", "beau", "doulx", ",", "the", "title", "from", "Anonymous", "."
  ],
  "ner": [
    [2, 16, "title"],
    [21, 21, "author"]
  ]
}
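Because the span end indices are inclusive, the entity surface form for a span [start, end, label] is tokens[start : end + 1]. A small helper (hypothetical, not part of any dataset tooling) makes this concrete using the record above:

```python
# Hypothetical helper: recover (label, surface_text) pairs from a record.
# Note the inclusive end index, hence the "end + 1" slice bound.
def extract_entities(record: dict) -> list[tuple[str, str]]:
    tokens = record["tokenized_text"]
    return [(label, " ".join(tokens[start:end + 1]))
            for start, end, label in record["ner"]]

record = {
    "tokenized_text": ["Looking", "for", "La", "conqueste", "du", "chasteau",
                       "d", "'", "amours", "conquestee", "par", "l", "'",
                       "umilité", "du", "beau", "doulx", ",", "the", "title",
                       "from", "Anonymous", "."],
    "ner": [[2, 16, "title"], [21, 21, "author"]],
}

entities = extract_entities(record)
# entities[1] is ("author", "Anonymous")
```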

3.3  Splits

The corpus is provided in two separate files:

  • train.jsonl — 391 432 synthetic queries
  • eval.jsonl  — 43 493 synthetic queries
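Because the splits are plain JSON Lines, they can also be read without the datasets library, e.g. with pandas. The snippet below uses an inline two-record sample in the dataset's layout; in practice you would point read_json at the downloaded train.jsonl or eval.jsonl.

```python
import io
import pandas as pd

# Two stand-in records in the dataset's JSON Lines layout.
sample_jsonl = (
    '{"tokenized_text": ["Looking", "for", "Walden", "."], "ner": [[2, 2, "title"]]}\n'
    '{"tokenized_text": ["Any", "recommendations", "by", "Thoreau", "?"], "ner": [[3, 3, "author"]]}\n'
)

# In practice: df = pd.read_json("train.jsonl", lines=True)
df = pd.read_json(io.StringIO(sample_jsonl), lines=True)
# df has one row per query, with "tokenized_text" and "ner" columns
```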

4  How to load

from datasets import load_dataset  # pip install datasets

ds_train = load_dataset("empathyai/books-ner-dataset", split="train")
ds_eval  = load_dataset("empathyai/books-ner-dataset", split="eval")

5  Licensing and legal basis (EU)

5.1  Why the data are in the public domain

  1. Factual nature — Book titles and author names are mere facts. Under EU law (CJEU Infopaq C‑5/08 and Art. 2 InfoSoc‑Directive 2001/29/EC) facts and very short expressions lacking originality are not protected by copyright.
  2. No sui generis database right — The maker of Project Gutenberg’s catalogue is a U.S. entity; the EU database right (Directive 96/9/EC) only protects databases whose maker is established in the EU.

5.2  License chosen

To reflect the public‑domain status worldwide we apply the Creative Commons CC0 1.0 Public‑Domain Dedication. You may copy, modify, distribute and use the dataset for any purpose without asking permission.

5.3  Trademarks and attribution

  • “Project Gutenberg” is a registered trademark of the Project Gutenberg Literary Archive Foundation. This dataset is not endorsed by or affiliated with PG.
  • If you build on this dataset, please leave this disclaimer intact and do not use “Project Gutenberg” in a way that suggests endorsement.

6  Acknowledgements

Thanks to the Project Gutenberg volunteers for maintaining the free catalogue, and to Hugging Face for hosting the dataset.
