# The Pretokenized Dolma Dataset
A pre-tokenized, pre-shuffled version of Dolma, the high-quality text corpus from AI2. This dataset is designed to be plug-and-play with the pico-train library.
## Overview

**Key Features:**
- Tokenized with the allenai/OLMo-7B-0724-hf tokenizer, a BPE tokenizer with a vocabulary size of 50,280
- Sequence length: 2049 tokens (2048 + 1, so each row yields both an input and a shifted target for next-token prediction; see the sketch after this list)
- Sharded into 10,000 Parquet files (~78 MB each)
- ~420B tokens in total (200K training steps × batch size 1024 × 2048 tokens per sequence ≈ 419.4B)
- Ready for streaming via `datasets.load_dataset(..., streaming=True)`
- Pre-shuffled, so the order in which data is shown to models is consistent across training runs
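
The 2049-token layout exists so that each row can be split into a 2048-token input and a 2048-token target shifted by one position. A minimal sketch of that split (the placeholder row below is illustrative, not real data):

```python
import torch

# Each row carries 2049 token ids; the first 2048 are the model input
# and the last 2048 (shifted by one position) are the next-token targets.
sequence = list(range(2049))  # placeholder for one row's `input_ids`

input_ids = torch.tensor(sequence[:-1])  # positions 0 .. 2047
labels = torch.tensor(sequence[1:])      # positions 1 .. 2048
assert input_ids.shape == labels.shape == (2048,)
```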
## How it was built
We first downloaded the full Dolma corpus and selected a random 30% subset for preprocessing. We then tokenized the text with the OLMo tokenizer and chunked it into sequences of 2049 tokens, separating documents with the tokenizer's end-of-sequence (`<|endoftext|>`) token.
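
As a rough illustration of the chunking step, here is a simplified packing loop (a sketch, not the released preprocessing script; `pack` is a hypothetical helper):

```python
from transformers import AutoTokenizer

SEQ_LEN = 2049  # 2048 + 1 for next-token prediction

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")

def pack(documents):
    """Concatenate tokenized documents, separated by the EOS token,
    and slice the resulting stream into fixed SEQ_LEN-token chunks."""
    buffer = []
    for doc in documents:
        buffer.extend(tokenizer(doc)["input_ids"])
        buffer.append(tokenizer.eos_token_id)
        while len(buffer) >= SEQ_LEN:
            yield buffer[:SEQ_LEN]
            buffer = buffer[SEQ_LEN:]
    # any trailing partial chunk is dropped, so only full-length
    # sequences survive, matching the released dataset
```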
After tokenization, we shuffled the token stream and sampled from it evenly to create 100 uniform shards, which we then subdivided into 10,000 smaller shards to support fast loading and parallel training. Only full-length sequences are retained, so every sample is exactly 2049 tokens long.
The dataset is stored as Parquet files, each containing token sequences under the key `input_ids`.
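
If you want to inspect a single shard without the `datasets` library, each Parquet file can be read directly; a sketch with pyarrow (the filename below is a placeholder; check the repo's file listing for actual shard names):

```python
import pyarrow.parquet as pq

# Read one shard; the filename here is illustrative only.
table = pq.read_table("shard_00000.parquet")
print(table.schema)              # a single `input_ids` column
row = table["input_ids"][0].as_py()
print(len(row))                  # 2049 token ids per sequence
```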
We release the exact scripts used to create this dataset in our pico-lm/pico-dataset GitHub repo.
## Usage
```python
from datasets import load_dataset

# Stream shards on demand instead of downloading the full dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
```
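
Iterating the stream yields one record at a time; a short sketch (assuming the default `train` split and the `input_ids` key described above):

```python
# Grab the first streamed example and check its length.
first = next(iter(dataset["train"]))
print(len(first["input_ids"]))  # 2049
```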