Dataset: LLaDA-Sample-10BT
Base: HuggingFaceFW/fineweb (subset sample-10BT)
Purpose: Training LLaDA (Large Language Diffusion Models)

Preprocessing

  • Tokenizer: GSAI-ML/LLaDA-8B-Instruct
  • Chunking: Up to 4,096 tokens per chunk (1% of chunks are given a random length between 1 and 4,096 tokens)
  • Noisy masking: Applied with noise factor ε = 1×10⁻³
  • Fields per chunk (PyTorch tensors):
    • input_ids
    • noisy_input_ids
    • mask
    • t (time scalar)
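
The noisy-masking step above can be sketched as follows. This is a hypothetical reimplementation, not the dataset's actual pipeline: it assumes the time scalar t is drawn uniformly from [ε, 1] and that each token is then masked independently with probability t, with a placeholder mask token id. The field names mirror the ones listed above.

```python
import torch

def noisy_mask(input_ids: torch.Tensor, mask_token_id: int, eps: float = 1e-3) -> dict:
    """Sketch of LLaDA-style forward masking for one chunk (assumed scheme).

    Samples t ~ U(eps, 1), then masks each token independently with
    probability t. Returns the four fields stored per chunk.
    """
    # Keep t away from 0 so some tokens are (almost) always masked.
    t = torch.rand(1) * (1.0 - eps) + eps
    # Independent Bernoulli(t) mask over token positions.
    mask = torch.rand(input_ids.shape) < t
    # Replace masked positions with the mask token id.
    noisy_input_ids = torch.where(
        mask, torch.full_like(input_ids, mask_token_id), input_ids
    )
    return {
        "input_ids": input_ids,
        "noisy_input_ids": noisy_input_ids,
        "mask": mask,
        "t": t,
    }
```

The actual mask token id comes from the GSAI-ML/LLaDA-8B-Instruct tokenizer; the value used here is a placeholder.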

Statistics

  • Total chunks: ~2,520,000
  • Shards: 252 .pt files
  • Chunks per file: 10,000
  • Average file size: ~702–708 MB
  • Total size: ~166 GB
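
Each shard is a standalone `.pt` file, so a single shard can be inspected without the rest of the dataset. The sketch below assumes each shard deserializes to a sequence of chunk dicts with the fields listed above; the shard filename is illustrative.

```python
import torch

def load_shard(path: str):
    """Load one .pt shard (~10,000 chunk dicts, assumed layout) onto CPU."""
    return torch.load(path, map_location="cpu")

# Example (hypothetical shard name):
# shard = load_shard("chunks_000.pt")
# chunk = shard[0]
# print(chunk["input_ids"].shape, chunk["t"])
```

`map_location="cpu"` lets shards saved on a GPU machine be inspected anywhere.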

Usage

This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
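
For orientation, the stored fields map onto a masked-diffusion training objective roughly like this. This is a minimal sketch, not the repository's actual training code: it computes cross-entropy only on masked positions and weights it by 1/t, averaging over the masked tokens (the exact normalization is defined by the repository's training scripts), and `logits` from a mask-predictor model is assumed.

```python
import torch
import torch.nn.functional as F

def llada_loss(logits: torch.Tensor,
               input_ids: torch.Tensor,
               mask: torch.Tensor,
               t: torch.Tensor) -> torch.Tensor:
    """Sketch of a masked-diffusion loss using the chunk fields.

    logits:    (batch, seq, vocab) predictions for the noisy input (assumed)
    input_ids: (batch, seq) clean targets from the chunk
    mask:      (batch, seq) bool, True where tokens were masked
    t:         scalar time value for the chunk
    """
    # Per-token cross-entropy against the clean tokens.
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        input_ids.view(-1),
        reduction="none",
    ).view_as(input_ids).float()
    # Score only masked positions, weighted by 1/t.
    n_masked = mask.sum().clamp(min=1)
    return (ce * mask).sum() / (t * n_masked)
```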
