dvres committed Commit 369685d · verified · 1 parent: c0030c4

Update README.md

Files changed (1): README.md (+26, −6)

README.md CHANGED
@@ -133,25 +133,45 @@ Remarks:
  - The following corpora were excluded from MetaFida: dgt15_sl, classlawiki_sl, tweet_sl, janes_tweet, janes_forum, janes_news
  - Serbian Wikipedia was converted from Cyrillic to Latin (a conversion sketch follows below)

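As a hedged aside, such a Cyrillic-to-Latin conversion can be done with an off-the-shelf transliterator. The sketch below uses the `cyrtranslit` package as an assumed example; the README does not say which tool was actually used.

```python
# Minimal sketch: Serbian Cyrillic -> Latin transliteration.
# `cyrtranslit` is an assumed choice; the actual preprocessing tool
# behind this corpus conversion is not specified in the README.
import cyrtranslit

text = "Београд је главни град Србије."
print(cyrtranslit.to_latin(text, "sr"))  # Beograd je glavni grad Srbije.
```
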
+ ## Training
+
+ The model was continually pre-trained on the Booster partition of the [Leonardo HPC](https://www.hpc.cineca.it/systems/hardware/leonardo/) system using the [NVIDIA NeMo 2.0 framework](https://github.com/NVIDIA/NeMo). Training ran in BF16-Mixed precision with tensor parallelism across 4 GPUs, sequence parallelism, and activation recomputation, on 32 nodes with 4 A100 64 GB GPUs each. The parallel alignment stage took approximately 4 hours and the second stage approximately 40 hours.
+
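As a rough sketch of that parallelism layout (assumed NeMo 2.0 Lightning API usage, not the project's actual training script):

```python
# Hedged sketch of the stated layout: TP=4 with sequence parallelism,
# BF16-Mixed, 32 nodes x 4 GPUs. Argument names follow NeMo 2.0's
# public API, but this is an approximation, not the GaMS training code.
from nemo import lightning as nl

strategy = nl.MegatronStrategy(
    tensor_model_parallel_size=4,  # tensor parallelism across 4 GPUs
    sequence_parallel=True,        # sequence parallelism, as stated above
)

trainer = nl.Trainer(
    devices=4,                     # 4 A100 64 GB GPUs per node
    num_nodes=32,                  # 32 Booster nodes
    strategy=strategy,
    plugins=nl.MegatronMixedPrecision(precision="bf16-mixed"),
)
# Activation recomputation is configured on the model config in NeMo
# (e.g. recompute_granularity); omitted here for brevity.
```
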
+ The model was trained using a cosine learning rate scheduler with linear warmup and the following hyperparameters (a sketch of the schedule follows the two lists below).
+
+ **Parallel alignment**:
+ - warmup steps: 150
+ - minimum learning rate: 5e-6
+ - maximum learning rate: 2e-5
+ - constant steps: 0
+ - batch size: 512 (4 million tokens)
+
+ **Second stage**:
+ - warmup steps: 500
+ - minimum learning rate: 5e-6
+ - maximum learning rate: 5e-5
+ - constant steps: 100
+ - batch size: 512 (4 million tokens)
+
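A worked sketch of the second-stage schedule, under the assumed reading that "constant steps" holds the minimum learning rate after the cosine decay (the total number of decay steps is illustrative, not stated in the README):

```python
# Hedged sketch: linear warmup -> cosine decay -> constant tail,
# using the second-stage hyperparameters above. `decay_steps` is
# an illustrative placeholder; the README does not state it.
import math

def lr_at_step(step, max_lr=5e-5, min_lr=5e-6,
               warmup_steps=500, decay_steps=10_000, constant_steps=100):
    if step < warmup_steps:                # linear warmup: 0 -> max_lr
        return max_lr * (step + 1) / warmup_steps
    if step < warmup_steps + decay_steps:  # cosine decay: max_lr -> min_lr
        progress = (step - warmup_steps) / decay_steps
        return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
    return min_lr                          # constant tail at min_lr

# Sanity check of the batch-size figure: with 512 sequences per batch and
# an assumed ~8K-token context (Gemma 2), 512 * 8192 ≈ 4.2M tokens/batch,
# matching "512 (4 million tokens)" above.
print(lr_at_step(0), lr_at_step(500), lr_at_step(10_600))
```
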
  ## Evaluation

- The models were evaluated using [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) collection of classification tasks on [SloBench](https://slobench.cjvt.si). Instruct version of the model was also evaluated on translation [from English to Slovene](https://slobench.cjvt.si/leaderboard/view/8) and [from Slovene to English](https://slobench.cjvt.si/leaderboard/view/7) Additionally, we evaluated our models on [Slovenian-LLM-Eval](https://huggingface.co/datasets/cjvt/slovenian-llm-eval).
+ The models were evaluated using the [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) collection of classification tasks on [SloBench](https://slobench.cjvt.si). The instruct version of the model was also evaluated on translation [from English to Slovene](https://slobench.cjvt.si/leaderboard/view/8) and [from Slovene to English](https://slobench.cjvt.si/leaderboard/view/7). Additionally, we evaluated our models on [Slovenian-LLM-Eval](https://huggingface.co/datasets/cjvt/slovenian-llm-eval).

  Code for evaluation:
  - [SloBench tasks](https://github.com/SloLama/slobench_evaluation)
  - [Slovenian-LLM-Eval](https://github.com/SloLama/slovenian-llm-eval)

- ## Slovenian-LLM-Eval results
+ ### Slovenian-LLM-Eval results

  A comparison between the GaMS models, the base Gemma 2 models, and SlovenianGPT (an open-source model for Slovene based on Mistral 7B) is shown in the figure below. All models were evaluated in a 0-shot scenario.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/tDyAjB2dgYXv1dLpFHikd.png)

- ## Slobench Results
+ ### SloBench results

  The GaMS 2B, 9B, and 27B models were evaluated in a 3-shot scenario, except for MultiRC and the translation tasks, where 0-shot was used. GaMS-9B-Instruct was evaluated in a 0-shot scenario on all tasks. We used guided decoding to ensure the correct format of the responses; a sketch of the idea follows below.

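As an illustration of guided decoding (assumed vLLM API and an assumed two-label prompt; the linked slobench_evaluation repository defines the actual setup):

```python
# Hedged sketch: constrain generation to a fixed label set so every
# response parses. Assumes vLLM's GuidedDecodingParams; the SloBench
# harness (github.com/SloLama/slobench_evaluation) may differ.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

llm = LLM(model="cjvt/GaMS-9B-Instruct")

params = SamplingParams(
    temperature=0.0,
    guided_decoding=GuidedDecodingParams(choice=["Da", "Ne"]),  # "Yes"/"No"
)

prompt = "Ali je Ljubljana glavno mesto Slovenije? Odgovori z Da ali Ne."
print(llm.generate([prompt], params)[0].outputs[0].text)
```
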
- ### Slovene SuperGLUE
+ #### Slovene SuperGLUE

  | Rank | Title | Average | BoolQ Accuracy | CB Accuracy | CB F1 Score | CB Average | COPA Accuracy | MultiRC EM | MultiRC F1a Score | MultiRC Average | RTE Accuracy | WSC Accuracy |
  |------|------------------------|---------|---------------|-------------|-------------|------------|--------------|------------|----------------|----------------|-------------|-------------|
@@ -169,7 +189,7 @@ GaMS 2B, 9B and 27B models were evaluated in 3-shot scenario, except for MultiRC
  | 15 | GaMS-1B-Chat | 0.4570 | 0.8000 | 0.4880 | 0.3023 | 0.3951 | 0.4840 | 0.1081 | 0.2428 | 0.1755 | 0.5172 | 0.3692 |

- ### English to Slovene translation (first 11 models on the benchmark)
+ #### English to Slovene translation (first 11 models on the benchmark)

  | Rank | Title | BERT score | BLEU (avg) | METEOR (avg) | CHRF (avg) | BLEU (corpus) | CHRF (corpus) |
  |------|---------------------------------|------------|------------|--------------|------------|---------------|---------------|
@@ -185,7 +205,7 @@ GaMS 2B, 9B and 27B models were evaluated in 3-shot scenario, except for MultiRC
  | 9 | META LLAMA 3.1 405B | 0.8705 | 0.2637 | 0.5497 | 0.5930 | 0.3063 | 0.5930 |
  | 11 | RSDO-DS4-NMT 1.2 | 0.8698 | 0.2781 | 0.5602 | 0.5970 | 0.3177 | 0.5970 |

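A hedged note on the metric columns in these translation tables: "(avg)" presumably averages per-sentence scores, while "(corpus)" pools statistics over the whole test set. A minimal sacrebleu sketch of that distinction (assumed interpretation, not the SloBench scoring code):

```python
# Hedged sketch: sentence-level scores averaged vs. a single
# corpus-level score, as the "(avg)"/"(corpus)" columns suggest.
import sacrebleu

hyps = ["the cat sat on the mat", "dogs bark loudly"]
refs = ["the cat sat on the mat", "the dogs bark loudly"]

avg_bleu = sum(sacrebleu.sentence_bleu(h, [r]).score
               for h, r in zip(hyps, refs)) / len(hyps)
corpus_bleu = sacrebleu.corpus_bleu(hyps, [refs]).score

avg_chrf = sum(sacrebleu.sentence_chrf(h, [r]).score
               for h, r in zip(hyps, refs)) / len(hyps)
corpus_chrf = sacrebleu.corpus_chrf(hyps, [refs]).score

print(f"BLEU  avg={avg_bleu:.2f}  corpus={corpus_bleu:.2f}")
print(f"chrF  avg={avg_chrf:.2f}  corpus={corpus_chrf:.2f}")
```
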
- ### Slovene to English translation (first 10 models on the benchmark)
+ #### Slovene to English translation (first 10 models on the benchmark)

  | Rank | Title | BERT score | BLEU (avg) | METEOR (avg) | CHRF (avg) | BLEU (corpus) | CHRF (corpus) |
  |------|---------------------|------------|------------|--------------|------------|---------------|---------------|