# PubMed Dataset Loader (Configurable 2015-2025) - Full Abstract Parsing

## Overview
This repository provides a modified Hugging Face `datasets` loading script for MEDLINE/PubMed data. It is designed to download and parse PubMed baseline XML files, with specific enhancements for extracting the complete text of structured abstracts.

The script is based on the original NCBI PubMed dataset loader from Hugging Face and includes modifications by Hoang Ha (LIG); the abstract parsing enhancements were adapted for the NanoBubble Project by Tiziri Terkmani (Research Engineer, LIG, Team SIGMA).
## Key Features

- Parses PubMed baseline XML files (`.xml.gz`).
- Full Abstract Extraction: correctly handles structured abstracts (e.g., BACKGROUND, METHODS, RESULTS) and extracts the complete text, unlike some previous parsers that might truncate it.
- Configurable Date Range: intended for use with data from 2015 to 2025, but requires manual configuration of the download URLs within the script (`pubmed_fulltext_dataset.py`).
- Generates data compatible with the Hugging Face `datasets` library schema.
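The "full abstract extraction" feature amounts to concatenating every `<AbstractText>` section of a structured abstract rather than keeping only the first one. The following is a minimal sketch of that idea using the standard library's `xml.etree.ElementTree`; the helper name `extract_full_abstract` and the inline sample XML are illustrative, not taken from the actual script.

```python
import xml.etree.ElementTree as ET

def extract_full_abstract(article_elem):
    """Join all <AbstractText> sections (e.g., BACKGROUND, METHODS)
    into a single string, keeping section labels when present."""
    parts = []
    for node in article_elem.findall(".//Abstract/AbstractText"):
        label = node.get("Label")
        # itertext() also collects text inside nested markup tags
        text = "".join(node.itertext()).strip()
        if not text:
            continue
        parts.append(f"{label}: {text}" if label else text)
    return " ".join(parts)

# Hypothetical structured abstract, mimicking the PubMed XML layout:
xml = """<PubmedArticle><Article><Abstract>
  <AbstractText Label="BACKGROUND">Context sentence.</AbstractText>
  <AbstractText Label="METHODS">Method sentence.</AbstractText>
</Abstract></Article></PubmedArticle>"""
article = ET.fromstring(xml)
print(extract_full_abstract(article))
# BACKGROUND: Context sentence. METHODS: Method sentence.
```

A parser that reads only the first `<AbstractText>` node would return just the BACKGROUND section here, which is the truncation behavior this loader avoids.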
## Dataset Information

- Intended Time Period: 2015-2025 (requires user configuration)
- Data Source: U.S. National Library of Medicine (NLM) FTP server
- License: Apache 2.0 for the script; NLM terms apply to the data itself
- Size: variable, depending on the configured download range. The full 2015-2025 range contains roughly 14 million abstracts.
## !! Important Caution !!

The Python script (`pubmed_fulltext_dataset.py`) requires manual modification to download the desired data range (e.g., 2015-2025). The default configuration only downloads a small sample of files from the 2025 baseline for demonstration purposes.

You MUST edit the `_URLs` list in the script to include the paths to ALL the `.xml.gz` files for each year you want to include.
### How to Configure URLs

- Go to the NLM FTP baseline directory: ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/
- For each year (e.g., 2015, 2016, ..., 2025), identify all the `pubmedYYnXXXX.xml.gz` files. The number of files (`XXXX`) varies per year. Checksum files (e.g., `pubmed24n.xml.gz.md5`) often list all files for that year.
- Construct the full URL for each file (e.g., `https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmedYYnXXXX.xml.gz`).
- Add all these URLs to the `_URLs` list in the script. See the comments within the script for examples.

Use this loader with caution, and always verify the scope of the data you have actually downloaded and processed.
## Usage

To load the data with this script (after configuring the `_URLs` list):

```python
from datasets import load_dataset

dataset = load_dataset("HoangHa/pubmed25_debug", split="train", trust_remote_code=True, cache_dir=".")
print(dataset)
# With split="train", load_dataset returns a Dataset, so index rows directly:
print(dataset[0]['MedlineCitation']['Article']['Abstract']['AbstractText'])
```