Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference
TildeSIA committed (verified) · Commit 47b5416 · Parent(s): 840f1f2

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED

@@ -49,13 +49,13 @@ datasets:
 **License:** CC-BY-4.0
 
 ## Mission statement
-TildeLM is an open-source foundational language model built to serve underrepresented Nordic and Eastern European languages. Developed with European Commission funding and trained on the LUMI supercomputer, this 30B parameter model addresses the performance gaps that speakers of 19 focus languages—representing over 165 million people—face with existing AI systems.
-The model employs an equitable tokenizer and curriculum-learning approach to ensure fair representation across lower-resource languages, moving beyond the typical English-centric design of most language models. As an open-source project, TildeLM enables transparent research and community-driven development while maintaining European technological independence.
-This foundational model is not yet adapted to follow instructions or aligned with safety features. The next version being built on top of this model will be a specialized translation model, leveraging TildeLM's multilingual foundation to provide high-quality translation capabilities across the supported European language pairs.
+TildeOpen is an open-source foundational language model built to serve underrepresented Nordic and Eastern European languages. Developed with European Commission funding and trained on the LUMI supercomputer, this 30B parameter model addresses the performance gaps that speakers of 19 focus languages—representing over 165 million people—face with existing AI systems.
+The model employs an equitable tokenizer and curriculum-learning approach to ensure fair representation across lower-resource languages, moving beyond the typical English-centric design of most language models. As an open-source project, TildeOpen enables transparent research and community-driven development while maintaining European technological independence.
+This foundational model is not yet adapted to follow instructions or aligned with safety features. The next version being built on top of this model will be a specialized translation model, leveraging TildeOpen's multilingual foundation to provide high-quality translation capabilities across the supported European language pairs.
 
 
 ## Model training details
-We train TildeLM using the [Tilde's branch](https://github.com/tilde-nlp/llm-gpt-neox) of [EleutherAI's](https://www.eleuther.ai/) open-source GPT-NeoX framework on LUMI supercomputer's 768 AMD MI250X GPUs. The foundational model training involves 450,000 updates with a constant batch size of 4,718,592 tokens, using a constant learning rate followed by a cooldown phase across 2 trillion tokens. Training consists of three distinct data sampling phases. First, all languages are sampled uniformly to ensure equal representation. Second, languages are sampled according to their natural distribution to ensure that the model sees as much data from languages with larger speaker bases as possible. Finally, we return to uniform sampling across all languages. This three-phase approach ensures TildeLM develops balanced multilingual capabilities while maintaining strong performance across all target languages, particularly the underrepresented European languages.
+We train TildeOpen using the [Tilde's branch](https://github.com/tilde-nlp/llm-gpt-neox) of [EleutherAI's](https://www.eleuther.ai/) open-source GPT-NeoX framework on LUMI supercomputer's 768 AMD MI250X GPUs. The foundational model training involves 450,000 updates with a constant batch size of 4,718,592 tokens, using a constant learning rate followed by a cooldown phase across 2 trillion tokens. Training consists of three distinct data sampling phases. First, all languages are sampled uniformly to ensure equal representation. Second, languages are sampled according to their natural distribution to ensure that the model sees as much data from languages with larger speaker bases as possible. Finally, we return to uniform sampling across all languages. This three-phase approach ensures TildeOpen develops balanced multilingual capabilities while maintaining strong performance across all target languages, particularly the underrepresented European languages.
 ## Model Hyper-Parameters
 | Parameter | Value |
 |-----------|-------|
@@ -73,4 +73,4 @@ We train TildeLM using the [Tilde's branch](https://github.com/tilde-nlp/llm-gpt
 | Non-embedding Parameters | 2.91E+10 |
 | Total Parameters | 3.07E+10 |
 ## Tokenizer details
-We built the TildeLM tokeniser to ensure equitable language representation across languages. Technically, we trained the tokeniser to represent the same text regardless of the language it is written in, using a similar number of tokens. In practice, TildeLM will be more efficient and faster than other models for our focus languages, as writing out answers will require fewer steps. For more details on how TildeLM compares against other models, see **[TILDE Bench](https://tilde-nlp.github.io/tokenizer-bench.html)**!
+We built the TildeOpen tokeniser to ensure equitable language representation across languages. Technically, we trained the tokeniser to represent the same text regardless of the language it is written in, using a similar number of tokens. In practice, TildeOpen will be more efficient and faster than other models for our focus languages, as writing out answers will require fewer steps. For more details on how TildeOpen compares against other models, see **[TILDE Bench](https://tilde-nlp.github.io/tokenizer-bench.html)**!
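
To make the three-phase data sampling in the "Model training details" section concrete, the sketch below computes per-language sampling weights for each phase: uniform in phases 1 and 3, proportional to natural corpus size in phase 2. The `sampling_weights` helper and the corpus sizes are hypothetical illustrations, not the project's actual data pipeline or published statistics.

```python
# Minimal sketch of a three-phase language sampling schedule (assumed, not the
# project's actual implementation). Corpus sizes below are made-up placeholders.

def sampling_weights(corpus_tokens: dict[str, float], phase: int) -> dict[str, float]:
    """Return per-language sampling probabilities for a given training phase.

    Phases 1 and 3: uniform across languages.
    Phase 2: proportional to each language's natural corpus size.
    """
    languages = list(corpus_tokens)
    if phase in (1, 3):
        # Uniform sampling: every language gets the same probability.
        return {lang: 1.0 / len(languages) for lang in languages}
    # Natural-distribution sampling: probability proportional to corpus size.
    total = sum(corpus_tokens.values())
    return {lang: n / total for lang, n in corpus_tokens.items()}


if __name__ == "__main__":
    # Hypothetical token counts for three of the focus languages.
    corpus = {"lv": 30e9, "lt": 45e9, "et": 25e9}
    for phase in (1, 2, 3):
        print(f"phase {phase}: {sampling_weights(corpus, phase)}")
```

The intuition behind such a schedule is that the uniform phases give lower-resource languages an outsized share of updates, while the natural-distribution phase lets the model see the larger corpora at closer to their true scale.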
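The tokeniser claim above (similar token counts for the same text across languages) can be spot-checked once the tokenizer is available on the Hub. Below is a minimal sketch using the Hugging Face `transformers` `AutoTokenizer`; the repository id and the example sentences are placeholders I have assumed for illustration, so substitute the actual TildeOpen checkpoint and your own parallel texts.

```python
# Sketch: compare how many tokens the tokenizer needs for parallel sentences
# in different languages. Requires the `transformers` package.
from transformers import AutoTokenizer

MODEL_ID = "TildeAI/TildeOpen-30B"  # placeholder id, not confirmed by this commit

# Roughly parallel sentences (illustrative examples only).
sentences = {
    "en": "The weather is nice today.",
    "lv": "Šodien ir jauks laiks.",
    "lt": "Šiandien oras gražus.",
}

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
for lang, text in sentences.items():
    ids = tokenizer(text)["input_ids"]
    print(f"{lang}: {len(ids)} tokens")
```

A roughly equal token count per language is what the "equitable tokenizer" design aims for; the TILDE Bench link above reports this comparison systematically across models.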