
Books intent classification dataset

A prompt intent classification dataset built from the titles, author names, and categories (subjects) contained in the Project Gutenberg catalog.

Its main purpose is to finetune small language models on the intent classification task.

Dataset Details

Dataset Description

  • Curated by: Empathy.co
  • Shared by: Project Gutenberg
  • Language(s) (NLP): English
  • License: CC0 1.0 Public Domain Dedication

Dataset Sources

Uses

This dataset is intended for finetuning Small Language Models on intent classification tasks. Given a query in the books domain, the model is instructed to generate a JSON object containing the expected intent.

Direct Use

You can load this dataset with the datasets library, then use it with transformers or unsloth to finetune a model.

Here is how to prepare a query using the same template format:

# Define instruction templates
QUERY_PROMPT_INTRODUCTION = """You're an expert in Project Gutenberg. Project Gutenberg (PG) is a volunteer effort to digitize and archive cultural works, as well as to "encourage the creation and distribution of eBooks." Most of the items in its collection are the full texts of books or individual stories in the public domain. Your main focus is to extract user intent."""


QUERY_PROMPT_TASK = """## Task
Given user input and context, extract the intent. 
* Consider user intent:
    * search_book: The user is looking for a specific book.
    * search_author: The user is looking for a specific author or their biography.
    * search_category: The user is looking for books of a category.
    * recommendation: User is looking for books suggestions, either similar to a title or from the same author.
    * novelties: User is looking for recently added books to the Project Gutenberg. Note that this is not the same as 'new books' in general, but rather books that have been added to the Project Gutenberg collection recently.
    * general_questions: The user is asking general questions about books, authors, or the Project Gutenberg collection. This includes questions like 'What are the characters in this book?' or 'What are some interesting details about that author?'.
    * out_of_domain: The user is asking something that is not related to books, the Project Gutenberg or its collection, like harmful requests or 'What's the weather like?'.

The result must be only a JSON with the following format:
{
    "chat_context": "refinement|new_request",
    "intent": "extracted_intent"
}
"""

def format_query(query: str) -> str:
    return f"""{QUERY_PROMPT_INTRODUCTION}
{QUERY_PROMPT_TASK}

## Input
{query}

## Response
"""

Dataset Structure

The dataset contains the following fields:

  • intent: given a user query in the books domain, this field contains its expected intent. Here are the available intents:
    • search_book: the user is looking for a specific book.
    • search_author: the user is looking for books by a specific author.
    • search_category: the user is looking for books of a category.
    • recommendation: the user is looking for book suggestions, either similar to a title or from the same author.
    • novelties: the user is looking for books recently added to Project Gutenberg. Note that this is not the same as "new books" in general, but rather books that have been added to the Project Gutenberg collection recently.
    • general_questions: the user is asking general questions about books, authors, or the Project Gutenberg collection. This includes questions like "What is the book about?" or "What is the author's biography?".
    • out_of_domain: the user is asking something that is not related to books, Project Gutenberg, or its collection, such as harmful requests or "What's the weather like?".
  • messages: the user query, already formatted into OpenAI's ChatML format for finetuning on the classification task. The prompt (first message) instructs an LLM to generate a JSON object (second message) with the expected intent.
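For illustration, a single record might look like the following (the query and message contents are hypothetical; the real first message contains the full instruction prompt shown in the Direct Use section):

```python
# Hypothetical example record; field values invented for illustration only.
example = {
    "intent": "search_book",
    "messages": [
        # First message: the instruction prompt with the user query embedded.
        {"role": "user", "content": "...instruction prompt...\n\n## Input\nFind Moby Dick\n\n## Response\n"},
        # Second message: the expected JSON answer the model should learn to emit.
        {"role": "assistant", "content": '{"chat_context": "new_request", "intent": "search_book"}'},
    ],
}
```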

The dataset contains a train and test split, with the following entries per intent class:

Train set counts by intent:

Intent              Count
general_questions   86221
novelties           19966
out_of_domain       20057
recommendation      87323
search_author       55998
search_book         87577
search_category     64211

Test set counts by intent:

Intent              Count
general_questions   21457
novelties            5013
out_of_domain        5058
recommendation      21959
search_author       13978
search_book         21808
search_category     16066

Dataset Creation

Curation Rationale

The purpose of this dataset is to demonstrate the ability of Smaller Language Models (<1B parameters) to outperform LLMs on specific tasks, while requiring fewer resources and offering lower latency.

The goal is to scale better for production-ready use cases without compromising quality.

Source Data

The data is built from the combination of two sources:

  • Gutenberg catalog: we downloaded the RDF and CSV offline catalogs provided by Project Gutenberg.

  • Intent templates: a collection of hand-curated templates, each associated with a specific intent. The templates may contain the placeholders {title}, {author} or {category}, so that we can generate combinations of those entities.

Data Collection and Processing

The source data processing is summarized in the following steps:

  • Iterate over the catalog and extract the following entities:
    • Author names
    • Book titles
    • Categories (subjects)
  • Normalization: remove stopwords, punctuation, and birth/death dates.
  • Template resolution: for each entity, the following is done:
    • sample a number of templates without replacement. The number of samples is tuned by hand to avoid skew towards a specific intent (e.g. there are twice as many books as authors).
    • format the template(s) with the entity.
    • generate an 'intent - query' pair for each template.
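The template-resolution step can be sketched as follows (the template strings, entity values and sample counts are illustrative; the real template collection is hand-curated):

```python
import random

# Hypothetical templates per intent; the real collection is hand-curated.
TEMPLATES = {
    "search_book": ["Find the book {title}", "I want to read {title}"],
    "search_author": ["Show me books by {author}", "Who is {author}?"],
}

def resolve_templates(intent: str, entity: dict, n_samples: int, seed: int = 0) -> list:
    """Sample templates without replacement and fill in the entity placeholders,
    yielding (intent, query) pairs."""
    rng = random.Random(seed)
    pool = TEMPLATES[intent]
    chosen = rng.sample(pool, k=min(n_samples, len(pool)))  # without replacement
    return [(intent, template.format(**entity)) for template in chosen]

pairs = resolve_templates("search_book", {"title": "Moby Dick"}, n_samples=2)
```

Sampling without replacement per entity, with a hand-tuned number of samples, is what keeps the generated pairs from skewing towards the more numerous entity types.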

The output of the previous steps is around 1 million pairs. Next, we do the following:

  • Prompt formatting: we format the dataset with the prompt template into ChatML format.
  • Train/test split: we sampled 40% and 10% of the pairs for the train and test sets respectively.
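Under those proportions, the split can be sketched as a per-intent stratified sample (the proportions come from the text above; the helper name is illustrative):

```python
import random

def stratified_split(pairs, train_frac=0.4, test_frac=0.1, seed=0):
    """Sample train/test subsets per intent so the class balance is preserved."""
    rng = random.Random(seed)
    by_intent = {}
    for intent, query in pairs:
        by_intent.setdefault(intent, []).append((intent, query))
    train, test = [], []
    for items in by_intent.values():
        rng.shuffle(items)
        n_train = int(len(items) * train_frac)
        n_test = int(len(items) * test_frac)
        train.extend(items[:n_train])
        test.extend(items[n_train:n_train + n_test])
    return train, test
```

Splitting per intent (rather than globally) keeps each class's share roughly equal across the train and test sets, which matches the per-intent counts reported above.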

Personal and Sensitive Information

The dataset contains only author names and book titles from the Gutenberg catalog, which are in the public domain. Hence, it doesn't contain any Personally Identifiable Information.

Bias, Risks, and Limitations

The dataset only contains information from books in the public domain. As a rule of thumb, this means that books published within roughly the last 90 years (as of 2025) are not represented. A model tuned on this dataset may forget knowledge about more recent titles.

Also, it is tailored to the Project Gutenberg domain. Hence, it may not generalize well to broader domains.

Acknowledgements

We thank the Project Gutenberg volunteers for maintaining the free catalog, and Hugging Face for hosting the dataset.
