Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability
Abstract
Institutional Books 1.0 is a large dataset of public domain books totaling approximately 242 billion tokens, released with refined OCR-extracted text and processed metadata to improve accessibility and usability.
Large language models (LLMs) use data to learn about the world in order to produce meaningful correlations and predictions. As such, the nature, scale, quality, and diversity of the datasets used to train these models, or to support their work at inference time, have a direct impact on the quality of their outputs. The rapid development and adoption of LLMs of varying quality has brought into focus the scarcity of publicly available, high-quality training data and revealed an urgent need to ground the stewardship of these datasets in sustainable practices with clear provenance chains. To that end, this technical report introduces Institutional Books 1.0, a large collection of public domain books originally digitized through Harvard Library's participation in the Google Books project, beginning in 2006. Working with Harvard Library, we extracted, analyzed, and processed these volumes into an extensively documented dataset of historic texts. This analysis covers the entirety of Harvard Library's collection scanned as part of that project, originally spanning 1,075,899 volumes written in over 250 different languages for a total of approximately 250 billion tokens. As part of this initial release, both the OCR-extracted text (original and post-processed) and the metadata (bibliographic, source, and generated) of the 983,004 volumes, or 242B tokens, identified as being in the public domain have been made available. This report describes this project's goals and methods as well as the results of the analyses we performed, all in service of making this historical collection more accessible and easier for humans and machines alike to filter, read, and use.
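For readers who want to explore the release programmatically, the sketch below shows one way to stream and filter the collection with the Hugging Face `datasets` library. The repository id and the field names used here (for example `language`) are illustrative assumptions, not details confirmed by the report; the dataset card documents the actual identifier and schema. Streaming is used so that the roughly 242B-token corpus is not downloaded in full.

```python
# Minimal sketch: stream the public-domain volumes and filter by language.
# NOTE: the repository id and field names below are assumptions for
# illustration; check the dataset card for the real identifier and schema.
from datasets import load_dataset

# Stream the dataset to avoid downloading the entire corpus up front.
books = load_dataset(
    "institutional/institutional-books-1.0",  # hypothetical repository id
    split="train",
    streaming=True,
)

# Keep only English-language volumes (the "language" field name is assumed).
english_books = books.filter(lambda volume: volume.get("language") == "en")

# Inspect a few records to see which fields the release actually provides,
# e.g. original vs. post-processed OCR text and the generated metadata.
for volume in english_books.take(3):
    print(sorted(volume.keys()))
```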
Community
What a great project! I hope other libraries will follow!
Librarian Bot (automated message): The following similar papers were recommended by the Semantic Scholar API.
- Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training (2025)
- The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text (2025)
- Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models (2025)
- Position: The Most Expensive Part of an LLM should be its Training Data (2025)
- MOLE: Metadata Extraction and Validation in Scientific Papers Using LLMs (2025)
- Developing a Mixed-Methods Pipeline for Community-Oriented Digitization of Kwak'wala Legacy Texts (2025)
- Synthetic Document Question Answering in Hungarian (2025)