SuperBPE

This 11B model was trained from scratch with a SuperBPE tokenizer. SuperBPE extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new superword tokens (spanning parts of multiple words)! It matches the 8B BPE model in both training and inference FLOPs.

The model uses a scaled-up version of the Olmo2 7B architecture and was trained on the Olmo2 7B pretraining data. It has a context length of 3,000 tokens (chosen to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and was trained on 238B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword tokens to learning superword tokens at a vocabulary size of 180k.
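Because superword tokens can span multiple words, each SuperBPE token covers more bytes of text on average, which is why the shorter context window above matches a 4,096-token BPE context in bytes. A minimal sketch of checking this on a sample string (the sentence and the resulting ratio are illustrative, not figures from the paper):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")

text = "By the way, I am a fan of the Milky Way."
ids = tokenizer.encode(text)

# Average number of UTF-8 bytes covered by each token: a higher value means
# more text fits into the same number of context positions.
print(len(text.encode("utf-8")) / len(ids))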

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")

tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
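
For generation, the model works with the standard transformers generate API; a minimal sketch continuing from the code above (the prompt and decoding settings are illustrative):

inputs = tokenizer("By the way, I am a fan of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
# Decoding maps superword token IDs back to plain text transparently.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))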

Citation

@misc{liu-etal-2025-superbpe,
  title={SuperBPE: Space Travel for Language Models}, 
  author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi},
  year={2025},
  eprint={2503.13423},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.13423}, 
}