---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  - name: input_ids
    sequence: int32
  splits:
  - name: val
    num_bytes: 481445317
    num_examples: 29016
  download_size: 240052852
  dataset_size: 481445317
configs:
- config_name: default
  data_files:
  - split: val
    path: data/val-*
license: apache-2.0
task_categories:
- text2text-generation
language:
- en
pretty_name: Pretokenized Paloma Dataset
size_categories:
- 10K<n<100K
---
# The Pretokenized Paloma Benchmark Dataset
This is a compact, pre-tokenized evaluation dataset designed to complement the [pretokenized-dolma](https://huggingface.co/datasets/pico-lm/pretokenized-dolma) training set. Built from the Allen Institute for AI's [Paloma corpus](https://github.com/allenai/OLMo-Eval/blob/main/paloma/README.md), this benchmark was constructed to have no data overlap with Dolma, making it well suited for evaluating models trained on that corpus.
### Overview
Features:
- Pre-tokenized with the same tokenizer as pretokenized-dolma: [allenai/OLMo-7B-0724-hf](https://huggingface.co/allenai/OLMo-7B-0724-hf)
- Sequence length: 2048 tokens
- Ideal for perplexity evaluation of models trained on pretokenized-dolma (see the sketch under Usage below)
We release the exact scripts we use to create this dataset in our [pico-lm/pico-dataset](https://github.com/pico-lm/pico-dataset) GitHub repo.
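
As a quick sanity check, the stored `input_ids` can be decoded back to text with the same OLMo tokenizer. The snippet below is a minimal sketch (not part of the release scripts) and assumes only the fields declared in the schema above: `text`, `source`, and `input_ids`.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Same tokenizer used to pre-tokenize the corpus.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")
dataset = load_dataset("pico-lm/pretokenized-paloma", split="val", streaming=True)

example = next(iter(dataset))
print(example["source"])                              # provenance string stored with each record
print(len(example["input_ids"]))                      # fixed sequence length: 2048 tokens
print(tokenizer.decode(example["input_ids"])[:200])   # preview of the decoded text
```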
### Usage
```python
from datasets import load_dataset

# Streaming avoids downloading the full parquet shards up front.
dataset = load_dataset("pico-lm/pretokenized-paloma", streaming=True)
```
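
Because every record already carries fixed-length `input_ids`, perplexity can be computed directly from the stored token IDs with no extra tokenization step. The sketch below is illustrative only: the checkpoint name is a placeholder for whichever model (e.g. one trained on pretokenized-dolma) you want to evaluate, and it scores a small streamed subsample rather than the full split.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

# Placeholder checkpoint -- substitute the model you want to evaluate.
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0724-hf")
model.eval()

dataset = load_dataset("pico-lm/pretokenized-paloma", split="val", streaming=True)

total_loss, num_sequences = 0.0, 0
for example in dataset.take(100):  # drop .take() to evaluate the full split
    input_ids = torch.tensor(example["input_ids"]).unsqueeze(0)  # shape (1, 2048)
    with torch.no_grad():
        # labels == input_ids gives the standard next-token cross-entropy loss
        loss = model(input_ids=input_ids, labels=input_ids).loss
    total_loss += loss.item()
    num_sequences += 1

print(f"perplexity: {math.exp(total_loss / num_sequences):.2f}")
```

Since every sequence in the split has the same length, averaging the per-sequence losses before exponentiating matches a token-weighted average.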