---
license: cc-by-4.0
task_categories:
  - image-text-to-text
  - image-feature-extraction
language:
  - en
tags:
  - pdf
  - ocr
  - legal
  - government
size_categories:
  - 100K<n<1M
dataset_info:
  - config_name: index
    features:
      - name: filename
        dtype: string
      - name: filepath
        dtype: string
      - name: broken_pdf
        dtype: bool
      - name: num_pages
        dtype: float64
      - name: created_date
        dtype: string
      - name: modified_date
        dtype: string
      - name: title
        dtype: string
      - name: author
        dtype: string
      - name: subject
        dtype: string
      - name: file_size_mb
        dtype: float64
      - name: error_message
        dtype: string
    splits:
      - name: train
        num_bytes: 39695484
        num_examples: 229917
    download_size: 19387703
    dataset_size: 39695484
  - config_name: sample
    features:
      - name: pdf
        dtype: pdf
      - name: num_pages
        dtype: float64
      - name: created_date
        dtype: string
      - name: modified_date
        dtype: string
      - name: title
        dtype: string
      - name: author
        dtype: string
      - name: subject
        dtype: string
      - name: file_size_mb
        dtype: float64
      - name: broken_pdf
        dtype: bool
      - name: error_message
        dtype: string
    splits:
      - name: train
        num_bytes: 879832
        num_examples: 5000
    download_size: 400528
    dataset_size: 879832
configs:
  - config_name: index
    data_files:
      - split: train
        path: index/train-*
  - config_name: sample
    data_files:
      - split: train
        path: sample/train-*
---

# govdocs1: source PDF files

This is a collection of ~220,000 open-access PDF documents (about 6.6 million pages) from the govdocs1 dataset. It wants to be OCR'd.

- Uploaded as tar-file pieces of ~10 GiB each (due to repository size/file-count limits), with an `index.csv` covering per-file details.
- 5,000 randomly sampled PDFs are available unarchived in `sample/`; Hugging Face supports previewing these in-browser.

## Recovering the data

Download the `data/` directory (with `huggingface-cli download` or similar), then extract the tar pieces:

```sh
cat data_pdfs_part.tar.* | tar -xf - && rm data_pdfs_part.tar.*
```
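The same concatenate-then-extract logic can be mirrored in Python; a minimal sketch (the function name and in-memory buffering are illustrative assumptions, not part of the dataset tooling — for the full multi-hundred-GB archive, prefer the streaming shell one-liner above):

```python
import glob
import io
import tarfile

def extract_tar_pieces(pattern="data_pdfs_part.tar.*", dest="."):
    """Concatenate split tar pieces in lexical order and extract the result.

    Note: this buffers all pieces in memory, so it is only practical for
    small subsets; the shell pipeline streams instead.
    """
    pieces = sorted(glob.glob(pattern))
    buf = io.BytesIO()
    for piece in pieces:
        with open(piece, "rb") as f:
            buf.write(f.read())
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r:") as tar:
        tar.extractall(path=dest)
    return pieces
```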

## Processing details

### Duplicates

Exact-duplicate PDFs were removed with `jdupes`. See the log file for details.
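For reference, exact-duplicate detection of the kind `jdupes` performs can be sketched with a content hash; this is an illustrative stand-in (hashing instead of jdupes' byte-by-byte comparison), not the script actually used:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(root):
    """Group files under `root` by SHA-256 digest.

    Any group with more than one member is a set of byte-identical
    duplicates (for practical purposes, the same matches jdupes reports).
    """
    groups = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            groups[digest].append(p)
    return [sorted(g) for g in groups.values() if len(g) > 1]
```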


## By the numbers

Based on the `index.csv`:

### Dataset Overview

| Metric | Value | Percentage |
|---|---|---|
| Total Documents | 229,917 | 100% |
| Successfully Processed | 229,824 | 99.96% |
| Broken/Corrupted | 93 | 0.04% |
| Unique Filenames | 229,917 | 100% |
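These headline numbers can be recomputed directly from `index.csv`; a minimal sketch assuming pandas and the column names listed in the `dataset_info` block above (the helper name is my own):

```python
import pandas as pd

def summarize_index(path="index.csv"):
    """Recompute headline stats from the index file.

    Assumes the columns declared in dataset_info: broken_pdf (bool),
    num_pages (float), file_size_mb (float), among others.
    """
    df = pd.read_csv(path)
    return {
        "total": len(df),
        "broken": int(df["broken_pdf"].sum()),
        "mean_pages": float(df["num_pages"].mean()),
        "median_pages": float(df["num_pages"].median()),
        "mean_size_mb": float(df["file_size_mb"].mean()),
    }
```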

### Document Structure

#### Page Count Distribution

| Pages | Count | Percentage |
|---|---|---|
| 2 pages | 21,887 | 9.5% |
| 1 page | 19,282 | 8.4% |
| 4 pages | 14,640 | 6.4% |
| 3 pages | 12,861 | 5.6% |
| 6 pages | 9,770 | 4.3% |

| Statistic | Value |
|---|---|
| Range | 1 – 3,200 pages |
| Mean | 27.8 pages |
| Median | 10 pages |
| Standard Deviation | 67.9 pages |

#### File Size Distribution

| Size (MB) | Count | Percentage |
|---|---|---|
| 0.02 | 13,427 | 5.8% |
| 0.03 | 12,142 | 5.3% |
| 0.04 | 12,085 | 5.3% |
| 0.05 | 11,850 | 5.2% |
| 0.01 | 9,929 | 4.3% |

| Statistic | Value |
|---|---|
| Range | 0 – 68.83 MB |
| Mean | 0.565 MB |
| Median | 0.15 MB |
| Standard Deviation | 1.134 MB |

### Metadata Completeness Crisis

| Field | Missing | Present | Completeness |
|---|---|---|---|
| Subject | 182,430 | 47,487 | 20.6% |
| Author | 78,269 | 151,648 | 66.0% |
| Title | 51,514 | 178,403 | 77.6% |
| Created Date | 3,260 | 226,657 | 98.6% |

#### Title Quality Breakdown

| Title Type | Count | Percentage |
|---|---|---|
| Missing (None) | 51,514 | 22.4% |
| Generic "Document" | 11,699 | 5.1% |
| "untitled" | 2,081 | 0.9% |
| Meaningful titles | ~165,000 | 71.6% |

#### Top Authors

| Author | Count |
|---|---|
| U.S. Government Printing Office | 11,838 |
| Unknown | 3,477 |
| Administrator | 1,630 |
| U.S. Government Accountability Office | 1,390 |

#### Top Subjects

| Subject | Count |
|---|---|
| Extracted Pages | 11,692 |
| NIOSH HHE REPORT | 466 |
| CMS Opinion Template | 353 |
| SEC Financial Proposals Summary | 230 |

### Processing Errors

| Error Type | Count | Percentage |
|---|---|---|
| Could not read Boolean object | 46 | 49.5% |
| cryptography>=3.1 required for AES | 15 | 16.1% |
| Stream ended unexpectedly | 9 | 9.7% |
| 'NullObject' has no attribute 'get' | 5 | 5.4% |
| Other errors | 18 | 19.4% |

### Temporal Coverage

| Date Field | Range | Issues |
|---|---|---|
| Modified Date | 1979-12-31 to 2025-03-31 | Dates in 2023–2025 are incorrect (defaulted values) |
| Created Date | Various formats | 1,573 invalid `D:00000101000000Z` values |

## Critical Assessment

*Generated by Claude Sonnet-4, unsolicited (as always).*

### Data Quality Issues

| Issue | Severity | Impact |
|---|---|---|
| Metadata Poverty | CRITICAL | 79% missing subjects kills discoverability |
| Title Degradation | HIGH | 28% generic/missing titles |
| Date Inconsistencies | MEDIUM | Invalid formats, future dates |
| Processing Errors | LOW | 0.04% failure rate is acceptable |

### Key Insights

**Document Profile:** the typical government PDF is 10 pages, 0.15 MB, and metadata-poor.

**Fatal Flaw:** this dataset has excellent technical extraction (99.96% success) but catastrophic intellectual organization. You're essentially working with 230K unlabeled documents.

**Bottom Line:** the structural data is solid, but without subject classification for 79% of documents, this is an unindexed digital landfill masquerading as an archive.