## Model training details
We train TildeLM using [Tilde's branch](https://github.com/tilde-nlp/llm-gpt-neox) of [EleutherAI's](https://www.eleuther.ai/) open-source GPT-NeoX framework on 768 AMD MI250X GPUs of the LUMI supercomputer. Training the foundational model involves 450,000 updates with a constant batch size of 4,718,592 tokens, using a constant learning rate followed by a cooldown phase, for a total of roughly 2 trillion tokens. Training consists of three distinct data-sampling phases. First, all languages are sampled uniformly to ensure equal representation. Second, languages are sampled according to their natural distribution, so that the model sees as much data as possible from languages with larger speaker bases. Finally, we return to uniform sampling across all languages. This three-phase approach ensures that TildeLM develops balanced multilingual capabilities while maintaining strong performance across all target languages, particularly the underrepresented European languages.
## Model hyper-parameters
| Parameter | Value |
|-----------|-------|
| Sequence Length | 8192 |