Tokenized Ultra-FineWeb (100B English Tokens)
This repository provides a tokenized version of the English split of the openbmb/Ultra-FineWeb dataset, prepared for large-scale language model training. The dataset consists of 100 billion high-quality tokens, processed with a custom tokenizer.
The data is sharded into 100 files, each containing exactly 1 billion tokens, making it easy to stream and use in distributed training setups.
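As a rough illustration of how the fixed shard size helps in a distributed setting, each rank can simply claim every world_size-th shard. This is only a sketch: rank and world_size are assumed to come from your training launcher and are not part of this repository.

# Minimal sketch: assign the 100 equal-sized shards round-robin across ranks.
NUM_SHARDS = 100

def shards_for_rank(rank, world_size):
    # Rank r processes shards r, r + world_size, r + 2 * world_size, ...
    return [f"shards/shard_{i:04d}.npy" for i in range(rank, NUM_SHARDS, world_size)]

print(shards_for_rank(rank=0, world_size=8))
# ['shards/shard_0000.npy', 'shards/shard_0008.npy', ...]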
Dataset Details
- Source Dataset: openbmb/Ultra-FineWeb
- Language: English
- Total Tokens: 100,000,000,000
- Data Format: Sharded NumPy arrays
- Shard Count: 100 files (shard_0000.npy through shard_0099.npy; see the quick check after this list)
- Tokens per Shard: 1,000,000,000
- Data Type: numpy.uint32
- Tokenizer: Custom BPE tokenizer (see tokenizer.json for details)
- Vocabulary Size: 65,536
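The shard layout above can be verified directly against the Hub. The snippet below is a quick, non-essential check, assuming the shards sit under the shards/ prefix used later in this card; at numpy.uint32 (4 bytes per token), each 1-billion-token shard is roughly 4 GB on disk.

from huggingface_hub import HfApi

# List the shard files in the dataset repository and sanity-check the count.
api = HfApi()
files = [
    f for f in api.list_repo_files("meryyllebr543/ultrafineweb-100B-tokens", repo_type="dataset")
    if f.startswith("shards/") and f.endswith(".npy")
]
print(len(files))        # expected: 100
print(sorted(files)[0])  # expected: shards/shard_0000.npy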
Usage
You can load any individual shard with NumPy, or stream the entire dataset using the datasets library.
Loading a Single Shard
To load a specific shard, use the following Python code:
import numpy as np
from huggingface_hub import hf_hub_download

# Download and load a specific shard (repo_type="dataset" is required for dataset repos)
file_path = hf_hub_download(
    repo_id="meryyllebr543/ultrafineweb-100B-tokens",
    filename="shards/shard_0000.npy",
    repo_type="dataset",
)
tokens = np.load(file_path)

print(f"Loaded shard with {len(tokens):,} tokens.")
# Expected output: Loaded shard with 1,000,000,000 tokens.
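A full shard holds one billion uint32 values (about 4 GB), so it is often better not to pull it entirely into RAM. One option, sketched below with an illustrative sequence length that is not prescribed by this dataset, is to memory-map the downloaded file and slice fixed-length training sequences from it:

import numpy as np
from huggingface_hub import hf_hub_download

# Memory-map the shard instead of loading ~4 GB into RAM.
file_path = hf_hub_download(
    repo_id="meryyllebr543/ultrafineweb-100B-tokens",
    filename="shards/shard_0000.npy",
    repo_type="dataset",
)
tokens = np.load(file_path, mmap_mode="r")

seq_len = 2048  # illustrative context length
batch = np.asarray(tokens[:8 * seq_len]).reshape(8, seq_len)
print(batch.shape)   # (8, 2048)
print(tokens.dtype)  # uint32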
Streaming with the datasets library
This repository is structured to be compatible with the datasets library for streaming.
from datasets import load_dataset

# Stream the dataset (this is memory-efficient)
dataset = load_dataset("meryyllebr543/ultrafineweb-100B-tokens", streaming=True, split="train")

for item in dataset:
    # Each 'item' is a dictionary containing a batch of tokens.
    # The exact structure depends on how the data files are configured in the repository.
    print(item)
    break
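If load_dataset does not resolve the .npy shards on your version of datasets, a simple fallback (sketched here, not provided by the repository) is to stream shard by shard with huggingface_hub and yield fixed-size token chunks:

import numpy as np
from huggingface_hub import hf_hub_download

def iter_token_chunks(chunk_size=1_000_000, num_shards=100):
    # Download one shard at a time and yield it in fixed-size chunks,
    # so only a single shard is memory-mapped at any moment.
    for i in range(num_shards):
        path = hf_hub_download(
            repo_id="meryyllebr543/ultrafineweb-100B-tokens",
            filename=f"shards/shard_{i:04d}.npy",
            repo_type="dataset",
        )
        shard = np.load(path, mmap_mode="r")
        for start in range(0, len(shard), chunk_size):
            yield np.asarray(shard[start:start + chunk_size])

for chunk in iter_token_chunks():
    print(chunk.dtype, chunk.shape)  # uint32 (1000000,)
    break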
About the Original Ultra-FineWeb Dataset
This tokenized dataset is derived from Ultra-FineWeb, a large-scale, high-quality, and efficiently filtered dataset created by applying an efficient, verification-based filtering pipeline to the FineWeb dataset. Ultra-FineWeb serves as a core pre-training web dataset for the MiniCPM series of models.
For a complete understanding of the data filtering, verification, and evaluation, please refer to the official Ultra-FineWeb technical report.
License
The scripts used to generate this dataset are released under the Apache 2.0 license. The dataset itself is a derivative of openbmb/Ultra-FineWeb, which is also licensed under Apache 2.0. Following the original authors' guidelines, users of this dataset should also be aware of the licenses of the underlying data sources used to create FineWeb.
Citation
If you use this dataset in your work, please be sure to cite the original authors of the Ultra-FineWeb dataset:
@misc{wang2025ultrafineweb,
      title={{Ultra-FineWeb}: Efficient Data Filtering and Verification for High-Quality LLM Training Data},
      author={Yudong Wang and Zixuan Fu and Jie Cai and Peijun Tang and Hongya Lyu and Yewei Fang and Zhi Zheng and Jie Zhou and Guoyang Zeng and Chaojun Xiao and Xu Han and Zhiyuan Liu},
      year={2025},
      eprint={2505.05427},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
}