Nous Consilience 40B is a generative text model, pretrained from scratch in a decentralized fashion over the internet.
This model is updated automatically every 500 training steps; the latest checkpoint from the ongoing pretraining run is uploaded here (see the loading sketch below) and can be tracked on the pretraining dashboard.
For more information, read the blog post.
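If you want to try a checkpoint locally, here is a minimal loading sketch using the `transformers` library. The repository id below is a placeholder for illustration, not a confirmed name; substitute the actual repo for this model card.

```python
# Minimal sketch: loading the latest uploaded checkpoint with Hugging Face transformers.
# The repository id is a placeholder -- replace it with this model's actual repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NousResearch/Consilience-40B"  # hypothetical id used only for illustration

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # 40B parameters: bf16 weights need roughly 80 GB of memory
    device_map="auto",            # shard across available GPUs
)

prompt = "The history of decentralized computing begins"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```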
Model Details
Model Type: Decoder-only transformer
Parameters: 40 billion
Architecture: DeepSeek v3 + MLA (dense version without MoE routers; see the sketch below)
Pretraining Data: 20T tokens; a merge of FineWeb, FineWeb-2, and The Stack v2
Training Duration: TBD
Optimizer: DisTrO, decentralized version
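To make the "dense version without MoE routers" note concrete, here is an illustrative PyTorch sketch of a dense SwiGLU feed-forward block of the kind used in place of routed experts. The class name and dimensions are assumptions for illustration, not the model's actual configuration.

```python
# Illustrative sketch only: a dense SwiGLU feed-forward block of the kind that replaces
# DeepSeek-v3-style routed experts in a dense (non-MoE) variant. Dimensions are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSwiGLU(nn.Module):
    """One dense MLP applied to every token -- no router, no expert selection."""
    def __init__(self, d_model: int = 4096, d_ff: int = 14336):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every token passes through the same parameters; an MoE layer would instead
        # route each token to a small subset of expert MLPs chosen by a learned router.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

if __name__ == "__main__":
    block = DenseSwiGLU()
    tokens = torch.randn(2, 8, 4096)  # (batch, sequence, hidden)
    print(block(tokens).shape)        # torch.Size([2, 8, 4096])
```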
Pretraining Dataset
For training data, we combined FineWeb (14T), FineWeb-2 with some less common languages removed (4T), and The Stack v2 (~0.2T, upsampled to 1T tokens). We chose these datasets over more specialized pretraining datasets that aim purely to increase benchmark performance. Our goal with Consilience is to make a true "base" model -- one representative of the entirety of humanity's creative output, not one merely trying to win the benchmaxxing game.
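As a rough sanity check on the mixture above, the following sketch works out the upsampling factor for The Stack v2 and each source's share of the blend. The token counts are the approximate figures quoted in this card, not the exact recipe.

```python
# Back-of-the-envelope sketch of the pretraining mixture described above.
# Token counts are the approximate figures quoted in this card, in trillions of tokens.
raw = {
    "FineWeb": 14.0,
    "FineWeb-2 (filtered)": 4.0,
    "The Stack v2": 0.2,
}
target = dict(raw, **{"The Stack v2": 1.0})  # The Stack v2 upsampled to ~1T tokens

upsample_factor = target["The Stack v2"] / raw["The Stack v2"]
total = sum(target.values())  # ~19T tokens under this rough accounting

print(f"The Stack v2 upsample factor: ~{upsample_factor:.0f}x")
for name, tokens in target.items():
    print(f"{name:>22}: {tokens:>5.1f}T tokens ({tokens / total:.1%} of mixture)")
```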
Additionally, we're training this model continuously, without a final data "annealing" step. While annealing helps base models respond more accurately to benchmarks and improves usability, it may also constrain creativity and interesting behaviors. Our solution is simply to release both versions: the raw, un-annealed base model first, followed by an annealed version to aid usability.
License
Because this model represents the vast and diverse creative output of humankind, we release it under a dual license: CC0 by default -- dedicating it to the public domain -- while also allowing use under the MIT license for users who require permissive terms with attribution and warranty disclaimers.