---
dataset_info:
  features:
    - name: SMILES
      dtype: string
    - name: Deep SMILES
      dtype: string
    - name: SELFIES
      dtype: string
    - name: SAFE
      dtype: string
  splits:
    - name: train
      num_bytes: 605949364569
      num_examples: 1485280171
    - name: valid
      num_bytes: 1248532124
      num_examples: 2999216
    - name: test
      num_bytes: 1264493396
      num_examples: 2999132
  download_size: 241151459346
  dataset_size: 608462390089
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
size_categories:
  - 1B<n<10B
---

# ZINC_22 Pretraining Dataset

## Dataset Description

This dataset is derived from the ZINC-22 database (~70B synthesizable compounds as of Sept 2024) and was prepared for large-scale pretraining of molecular language models. We randomly sampled 1.5 billion molecules using a stratified heavy-atom-count split (4–49 atoms) to ensure coverage of diverse molecular sizes.
All molecules were deduplicated, canonicalized as SMILES, and converted into multiple string representations: SMILES, DeepSMILES, SELFIES, and SAFE.
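
The splits can be loaded directly with the `datasets` library. The snippet below is a minimal sketch, assuming only the repository id (`chandar-lab/ZINC_22`), the split names, and the column names from the metadata above; streaming avoids downloading the full ~600 GB of Parquet shards up front.

```python
from datasets import load_dataset

# Stream the training split instead of materializing it on disk
train = load_dataset("chandar-lab/ZINC_22", split="train", streaming=True)

# Each record carries several string representations of the same molecule
first = next(iter(train))
print(first["SMILES"])
print(first["SELFIES"])
```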


## Precomputed Statistics

This repository includes precomputed reference statistics (`*_stats.pkl`) for evaluating generated molecules against the validation and test sets.
These statistics are used to compute the following metrics:

- FCD – Fréchet ChemNet Distance
- SNN – Similarity to Nearest Neighbor
- Frag – Fragment similarity (BRICS decomposition)
- Scaf – Scaffold similarity (Bemis–Murcko scaffolds); both decompositions are illustrated in the sketch below
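
As a quick illustration of the decompositions behind the last two metrics, here is a small RDKit sketch that extracts the BRICS fragments and the Bemis–Murcko scaffold of a single molecule. It shows only the underlying operations, not the full similarity computation used for Frag and Scaf.

```python
from rdkit import Chem
from rdkit.Chem import BRICS
from rdkit.Chem.Scaffolds import MurckoScaffold

mol = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")  # paracetamol as a toy example

# BRICS decomposition: the fragment multiset is what the Frag metric compares
fragments = sorted(BRICS.BRICSDecompose(mol))
print("BRICS fragments:", fragments)

# Bemis–Murcko scaffold: the ring/linker skeleton compared by the Scaf metric
scaffold = MurckoScaffold.GetScaffoldForMol(mol)
print("Murcko scaffold:", Chem.MolToSmiles(scaffold))
```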

### File Naming Convention

Files are provided for multiple reference set sizes:

- `_175k` → 175,000 molecules
- `_500k` → 500,000 molecules
- `_1M` → 1 million molecules
- `_3M` → 3 million molecules
- (no suffix) → full set

By convention:

- `valid_stats_*` → computed from the random validation split
- `test_stats_*` → computed from the scaffold-based split

These statistics enable consistent and reproducible evaluation across experiments.
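
To check which reference-set sizes are actually present in the repository, you can list its files and filter on the naming convention above. The snippet below is a minimal sketch using `huggingface_hub.list_repo_files`; the filename filter is an assumption based on the convention described here.

```python
from huggingface_hub import list_repo_files

# List all files in the dataset repo and keep the precomputed-statistics pickles
files = list_repo_files("chandar-lab/ZINC_22", repo_type="dataset")
stats_files = [f for f in files if "stats" in f and f.endswith(".pkl")]
print(stats_files)
```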


## How to Use

Before running the example below, make sure you have these packages installed:

```bash
pip install huggingface_hub rdkit fcd-torch
```

**Example: Download stats from the Hub and compute FCD**

```python
from huggingface_hub import hf_hub_download
import pickle
from fcd_torch import FCD as FCDMetric

# 1. Download the precomputed stats file from Hugging Face Hub
stats_path = hf_hub_download(
    repo_id="chandar-lab/ZINC_22",
    repo_type="dataset",
    filename="valid_stats_175k.pkl"  # change to desired file
)

# 2. Load the reference stats
with open(stats_path, "rb") as f:
    reference_stats = pickle.load(f)

# 3. Compute FCD for your generated molecules
generated_smiles = ["CCO", "CCN", "CCCN", "CCCN"]  # replace with your generated set
fcd_calculator = FCDMetric(batch_size=4)

fcd_value = fcd_calculator(gen=generated_smiles, pref=reference_stats["FCD"])
print(f"FCD score: {fcd_value:.4f}")
```

## Citation

```bibtex
@misc{chitsaz2025novomolgenrethinkingmolecularlanguage,
      title={NovoMolGen: Rethinking Molecular Language Model Pretraining},
      author={Kamran Chitsaz and Roshan Balaji and Quentin Fournier and Nirav Pravinbhai Bhatt and Sarath Chandar},
      year={2025},
      eprint={2508.13408},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.13408},
}
```