Apertus
Model Summary
Apertus is a family of 8B- and 70B-parameter language models designed to push the boundaries of fully open, transparent multilingual models. The models support over 1000 languages and long context, use only fully compliant and open training data, and achieve performance comparable to models trained behind closed doors.
The models are decoder-only transformers, pretrained on 15T tokens with a staged curriculum of web, code, and math data. They use the new xIELU activation function and are trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.
Key features
- Fully open model: open weights + open data + full training details including all data and training recipes
- Massively Multilingual: 1811 natively supported languages
- Compliant: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively) and avoiding memorization of training data
For more details refer to our technical report
How to use
The modeling code for Apertus is available in transformers v4.56.0 and later, so make sure to upgrade your transformers version. You can also load the model with the latest vLLM, which uses transformers as a backend.
pip install -U transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "swiss-ai/Apertus-70B-Instruct-2509"
device = "cuda" # for GPU usage or "cpu" for CPU usage
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
).to(device)
# prepare the model input
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages_think,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
# Get and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
We recommend setting temperature=0.8 and top_p=0.9 in the sampling parameters.
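Continuing the snippet above, these values can be passed directly to generate. A minimal sketch (the max_new_tokens budget here is only illustrative):

```python
# Sample with the recommended temperature and nucleus (top-p) settings.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,   # illustrative budget; adjust to your use case
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```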
Long context processing
Apertus supports a context length of up to 65,536 tokens by default.
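A quick way to check whether an input fits in that default window, reusing the tokenizer loaded above (a minimal sketch; the input file name is a placeholder):

```python
# Count tokens in a long document and compare against the default context window.
MAX_CONTEXT = 65_536  # default context length quoted above

long_document = open("report.txt").read()  # placeholder input file
n_tokens = len(tokenizer(long_document, add_special_tokens=False).input_ids)

if n_tokens > MAX_CONTEXT:
    print(f"{n_tokens} tokens: exceeds the {MAX_CONTEXT}-token window; truncate or chunk the input.")
else:
    print(f"{n_tokens} tokens: fits in the context window.")
```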
Agentic Usage
Apertus supports tool use for agentic workflows.
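As one illustration, recent transformers chat templates accept a tools argument; the sketch below assumes the model's chat template supports tool definitions, and the get_weather function is a hypothetical stub:

```python
# Hypothetical tool: a plain Python function with type hints and a docstring,
# which apply_chat_template can convert into a tool schema.
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22 degrees"  # stub implementation for illustration

messages = [{"role": "user", "content": "What is the weather in Bern?"}]

# Pass the tool definitions alongside the conversation; the model can then
# emit a tool call that your application executes and feeds back as a
# {"role": "tool", ...} message.
text = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=256)
print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
```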
Deployment
Deployment of the models is directly supported by the newest versions of Transformers, vLLM, and SGLang; they can also be run on-device with MLX.
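As a sketch of offline inference with vLLM (assuming a recent vLLM release and enough GPU memory for the chosen checkpoint; the sampling values mirror the recommendation above):

```python
from vllm import LLM, SamplingParams

# Load the instruct checkpoint; vLLM uses the transformers implementation as a backend.
llm = LLM(model="swiss-ai/Apertus-70B-Instruct-2509")

sampling_params = SamplingParams(temperature=0.8, top_p=0.9, max_tokens=512)

# Chat-style generation; vLLM applies the model's chat template internally.
outputs = llm.chat(
    [{"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}],
    sampling_params,
)
print(outputs[0].outputs[0].text)
```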
Evaluation
Pretraining Evaluation: Performance (%) of Apertus models on general language understanding tasks (higher is better) compared to other pretrained models.
Model | Avg | ARC | HellaSwag | WinoGrande | XNLI | XCOPA | PIQA |
---|---|---|---|---|---|---|---|
Fully Open Models | |||||||
Apertus-8B | 65.8 | 72.7 | 59.8 | 70.6 | 45.2 | 66.5 | 79.8 |
Apertus-70B | 67.5 | 70.6 | 64.0 | 73.3 | 45.3 | 69.8 | 81.9 |
OLMo2-7B | 64.0 | 72.9 | 60.4 | 74.5 | 40.4 | 55.2 | 80.9 |
OLMo2-32B | 67.7 | 76.2 | 66.7 | 78.6 | 42.9 | 60.1 | 82.1 |
EuroLLM-1.7B | 54.8 | 57.2 | 44.9 | 58.1 | 40.7 | 55.7 | 72.4 |
EuroLLM-9B | 62.8 | 67.9 | 57.9 | 68.8 | 41.5 | 61.1 | 79.6 |
SmolLM2-1.7B | 58.5 | 66.1 | 52.4 | 65.6 | 37.6 | 52.3 | 77.0 |
SmolLM3-3B | 61.6 | 68.6 | 56.4 | 68.1 | 40.5 | 58.2 | 77.7 |
Poro-34B | 61.7 | 65.7 | 57.9 | 70.6 | 41.6 | 56.0 | 78.5 |
Open-Weight Models | |||||||
Llama3.1-8B | 65.4 | 71.6 | 60.0 | 73.4 | 45.3 | 61.8 | 80.1 |
Llama3.1-70B | 67.3 | 74.4 | 56.5 | 79.4 | 44.3 | 66.7 | 82.3 |
Qwen2.5-7B | 64.4 | 69.6 | 60.1 | 72.8 | 43.3 | 61.7 | 78.7 |
Qwen2.5-72B | 69.8 | 76.2 | 67.5 | 78.0 | 46.9 | 68.2 | 82.0 |
Qwen3-32B | 67.8 | 75.6 | 64.0 | 73.8 | 44.4 | 67.9 | 80.9 |
Llama4-Scout-16x17B | 67.9 | 74.7 | 66.8 | 73.2 | 43.5 | 67.7 | 81.2 |
GPT-OSS-20B | 58.1 | 67.0 | 41.5 | 66.5 | 37.4 | 60.4 | 75.6 |
Many additional benchmark evaluations, covering both the pretraining and post-training phases, multilingual evaluations in around one hundred languages, and long-context evaluations, are provided in Section 5 of the Apertus_Tech_Report.pdf
Training
Model
- Architecture: Transformer decoder
- Pretraining tokens: 15T
- Precision: bfloat16
Software & hardware
- GPUs: 4096 GH200
- Training Framework: Megatron-LM
- ...
Open resources
All elements used in the training process are made openly available:
- Training data reconstruction scripts: github.com/swiss-ai/pretrain-data
- Intermediate training checkpoints are available on the different branches of this repository; see the loading sketch below
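A specific intermediate checkpoint can be loaded by pointing the revision argument of from_pretrained at the corresponding branch (a minimal sketch; the branch name below is a placeholder, check the repository for the actual branch names):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "intermediate-checkpoint-branch" is a placeholder; substitute a real branch
# name listed in the repository.
revision = "intermediate-checkpoint-branch"

tokenizer = AutoTokenizer.from_pretrained("swiss-ai/Apertus-70B-Instruct-2509", revision=revision)
model = AutoModelForCausalLM.from_pretrained("swiss-ai/Apertus-70B-Instruct-2509", revision=revision)
```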
Limitations
Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
Legal Aspects
EU AI Act Transparency Documentation and Code of Practice
Data Protection and Copyright Requests
For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or us directly.
Output Filter for PII
- Currently no output filter is provided.
- Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests which have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.
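Until the official filter is published, a post-processing step of this general shape could be applied to model output. This is purely illustrative and not the official filter; the removal list and patterns are placeholders:

```python
import re

# Placeholder removal list; the official filter will reflect actual deletion
# requests received by the developers of the Apertus LLM.
REMOVAL_PATTERNS = [
    re.compile(r"\bJane Placeholder\b"),    # hypothetical name under a deletion request
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # simple SSN-like number pattern
]

def filter_output(text: str) -> str:
    """Redact entries from the removal list in generated text."""
    for pattern in REMOVAL_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(filter_output("Contact Jane Placeholder at 123-45-6789."))
```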
Contact
To contact us, please send an email to [email protected]
Citation
@misc{swissai2025apertus,
title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
author={Alejandro Hernández-Cano and Alexander Hägele and Allen Hao Huang and Angelika Romanou and Antoni-Joan Solergibert and Barna Pasztor and Bettina Messmer and Dhia Garbaya and Eduard Frank Ďurech and Ido Hakimi and Juan García Giraldo and Mete Ismayilzada and Negar Foroutan and Skander Moalla and Tiancheng Chen and Vinko Sabolčec and Yixuan Xu and Michael Aerni and Badr AlKhamissi and Ines Altemir Marinas and Mohammad Hossein Amani and Matin Ansaripour and Ilia Badanin and Harold Benoit and Emanuela Boros and Nicholas Browning and Fabian Bösch and Maximilian Böther and Niklas Canova and Camille Challier and Clement Charmillot and Jonathan Coles and Jan Deriu and Arnout Devos and Lukas Drescher and Daniil Dzenhaliou and Maud Ehrmann and Dongyang Fan and Simin Fan and Silin Gao and Miguel Gila and María Grandury and Diba Hashemi and Alexander Hoyle and Jiaming Jiang and Mark Klein and Andrei Kucharavy and Anastasiia Kucherenko and Frederike Lübeck and Roman Machacek and Theofilos Manitaras and Andreas Marfurt and Kyle Matoba and Simon Matrenok and Henrique Mendonça and Fawzi Roberto Mohamed and Syrielle Montariol and Luca Mouchel and Sven Najem-Meyer and Jingwei Ni and Gennaro Oliva and Matteo Pagliardini and Elia Palme and Andrei Panferov and Léo Paoletti and Marco Passerini and Ivan Pavlov and Auguste Poiroux and Kaustubh Ponkshe and Nathan Ranchin and Javi Rando and Mathieu Sauser and Jakhongir Saydaliev and Muhammad Ali Sayfiddinov and Marian Schneider and Stefano Schuppli and Marco Scialanga and Andrei Semenov and Kumar Shridhar and Raghav Singhal and Anna Sotnikova and Alexander Sternfeld and Ayush Kumar Tarun and Paul Teiletche and Jannis Vamvas and Xiaozhe Yao and Hao Zhao and Alexander Ilic and Ana Klimovic and Andreas Krause and Caglar Gulcehre and David Rosenthal and Elliott Ash and Florian Tramèr and Joost VandeVondele and Livio Veraldi and Martin Rajman and Thomas Schulthess and Torsten Hoefler and Antoine Bosselut and Martin Jaggi and Imanol Schlag},
year={2025},
howpublished={\url{https://arxiv.org/abs/2509.14233}}
}