---
library_name: transformers
license: gemma
language:
- sl
- en
- hr
- sr
- bs
base_model:
- google/gemma-2-2b
pipeline_tag: text-generation
---

# Model Card for GaMS-2B

GaMS-2B, GaMS-9B and GaMS-27B are new, improved and larger models of the GaMS (Generative Model for Slovene) family. The models are based on Google's Gemma 2 family and continually pretrained on Slovene, English, and a portion of Croatian, Serbian and Bosnian corpora.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/94gX0PG8zRB_Zg31K2y_i.png)

## Acknowledgment

The model was developed within the [PoVeJMo](https://www.cjvt.si/povejmo/en/project/) research program (Adaptive Natural Language Processing with Large Language Models), particularly within the research project titled SloLLaMai -- Open-access computationally efficient models for Slovenian. The program is funded within the Recovery and Resilience Plan by the Slovenian Research and Innovation Agency (ARIS) and NextGenerationEU. The authors also acknowledge the financial support from the Slovenian Research and Innovation Agency (research core funding No. P6-0411 -- Language Resources and Technologies for Slovene).

We thank everyone who worked on data collection and preparation, enabling us to train our model. Special thanks go to Nikola Ljubešić, Taja Kuzman, Tjaša Arčon, Jaka Čibej, Simon Krek, Tomaž Erjavec, Iztok Kosem and Tomaž Savodnik.

## Basic information

- **Developed by:** team of researchers at the University of Ljubljana, Faculty of Computer and Information Science. Team members: Domen Vreš, Iztok Lebar Bajec, Tjaša Arčon, Gašper Jelovčan and Marko Robnik-Šikonja.
- **Languages:** Slovene, English (primary), Croatian, Bosnian and Serbian (secondary). The model might also work for other languages supported by Gemma 2, even though it was not continually pretrained on them.
- **Base model:** [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
- **License:** [Gemma](https://ai.google.dev/gemma/terms)

## Usage

The model can be run through the `pipeline` API using the following code:

```python
from transformers import pipeline

model_id = "cjvt/GaMS-2B"

pline = pipeline(
    "text-generation",
    model=model_id,
    device_map="cuda" # replace with "mps" to run on a Mac device
)

prompts = [
    "The examples of antonyms are:\nhigh => low\nwide => narrow\nbig =>",
    "Pristanek je bil prvi nadzorovani spust ameriškega vesoljskega plovila na površje Lune po Apollu 17 leta 1972, ko je na Luni pristala zadnja Nasina misija s posadko.\nDoslej so na Luni pristala vesoljska plovila le iz štirih drugih držav –",
    "U četvrtak je bila prva polufinalna večer Dore, a komentari na društvenim mrežama ne prestaju. U nedjeljno finale prošli su:"
]

sequences = pline(
    prompts,
    max_new_tokens=512,
    num_return_sequences=1
)

for seq in sequences:
    print("--------------------------")
    print(f"Result: {seq[0]['generated_text']}")
    print("--------------------------\n")
```

For multi-GPU inference, set `device_map` to `auto`:

```python
from transformers import pipeline

model_id = "cjvt/GaMS-2B"

pline = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto"
)

prompts = [
    "The examples of antonyms are:\nhigh => low\nwide => narrow\nbig =>",
    "Pristanek je bil prvi nadzorovani spust ameriškega vesoljskega plovila na površje Lune po Apollu 17 leta 1972, ko je na Luni pristala zadnja Nasina misija s posadko.\nDoslej so na Luni pristala vesoljska plovila le iz štirih drugih držav –",
    "U četvrtak je bila prva polufinalna večer Dore, a komentari na društvenim mrežama ne prestaju. U nedjeljno finale prošli su:"
]

sequences = pline(
    prompts,
    max_new_tokens=512,
    num_return_sequences=1
)

for seq in sequences:
    print("--------------------------")
    print(f"Result: {seq[0]['generated_text']}")
    print("--------------------------\n")
```

## Data

### CPT Data

The model was continually pre-trained in two stages. In the first stage, parallel English-Slovene (and in some cases Croatian) corpora were used to align the languages. In the second stage, the model was trained on separate English, Slovene, Croatian, Bosnian and Serbian corpora.

#### Parallel alignment corpora

| Corpus | Alignment level | # Tokens | Percentage |
| :----- | :------- | :------: | :--------: |
| KAS Abstracts | Document level | 31 M | 1.65 % |
| DGT | Separate documents | 697 M | 36.56 % |
| MaCoCu Parallel | Separate documents | 430 M | 22.53 % |
| CC-News | Paragraph level | 749 M | 39.25 % |
| Total | | 1.91 B | |

Explanation of each alignment level:
- Document level: parallel documents were concatenated into a single document.
- Separate documents: parallel documents were not explicitly aligned.
- Paragraph level: paragraphs of parallel documents were interleaved (the first paragraph of the Slovene/English document was followed by the first paragraph in the other language, then by the second paragraph in the first language, and so on); a minimal sketch of this interleaving is shown below.
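
For illustration, here is a minimal sketch of the paragraph-level interleaving described above (the function and the example documents are hypothetical; the actual preprocessing code is not part of this card):

```python
def interleave_paragraphs(doc_a: str, doc_b: str) -> str:
    """Interleave paragraphs of two parallel documents: a1, b1, a2, b2, ...
    If one document has more paragraphs, the remainder is appended at the end."""
    paras_a = [p for p in doc_a.split("\n\n") if p.strip()]
    paras_b = [p for p in doc_b.split("\n\n") if p.strip()]
    merged = []
    for i in range(max(len(paras_a), len(paras_b))):
        if i < len(paras_a):
            merged.append(paras_a[i])
        if i < len(paras_b):
            merged.append(paras_b[i])
    return "\n\n".join(merged)

# Example with a made-up Slovene/English document pair:
slovene_doc = "Prvi odstavek.\n\nDrugi odstavek."
english_doc = "First paragraph.\n\nSecond paragraph."
print(interleave_paragraphs(slovene_doc, english_doc))
```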

#### Second stage corpora

| Corpus | Language | # Tokens | Percentage |
| :----- | :------- | :------: | :--------: |
| [KAS](https://www.clarin.si/repository/xmlui/handle/11356/1448) | Slovene | 2.77 B | 20.34 % |
| [MetaFida](https://www.clarin.si/repository/xmlui/handle/11356/1775)* | Slovene | 4.66 B | 34.18 % |
| [Wikipedia-En](https://huggingface.co/datasets/zidsi/wikipedia_markdown) (Date: January 23rd 2025) | English | 5.45 B | 39.99 % |
| [Wikipedia-Sl](https://huggingface.co/datasets/zidsi/wikipedia_markdown) (Date: January 1st 2025) | Slovene | 0.16 B | 1.19 % |
| [Wikipedia-Hr](https://huggingface.co/datasets/zidsi/wikipedia_markdown) (Date: January 1st 2025) | Croatian | 0.15 B | 1.13 % |
| [Wikipedia-Bs](https://huggingface.co/datasets/zidsi/wikipedia_markdown) (Date: January 1st 2025) | Bosnian | 0.07 B | 0.50 % |
| [Wikipedia-Sr-Latin](https://huggingface.co/datasets/zidsi/wikipedia_markdown)* | Serbian | 0.36 B | 2.68 % |
| Total | | 13.62 B | |

Remarks:
- The following corpora were excluded from MetaFida: dgt15_sl, classlawiki_sl, tweet_sl, janes_tweet, janes_forum, janes_news.
- The Serbian Wikipedia was converted from the Cyrillic to the Latin script (a minimal transliteration sketch is shown below).
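
As an illustration of the Cyrillic-to-Latin conversion mentioned above, here is a minimal transliteration sketch (the table is the standard Serbian mapping; the script actually used for the corpus is not part of this card):

```python
# Serbian Cyrillic -> Latin transliteration table (lowercase); uppercase is derived below.
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "ђ": "đ", "е": "e",
    "ж": "ž", "з": "z", "и": "i", "ј": "j", "к": "k", "л": "l", "љ": "lj",
    "м": "m", "н": "n", "њ": "nj", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "ћ": "ć", "у": "u", "ф": "f", "х": "h", "ц": "c", "ч": "č",
    "џ": "dž", "ш": "š",
}
CYR_TO_LAT.update({c.upper(): l.capitalize() for c, l in list(CYR_TO_LAT.items())})

def to_latin(text: str) -> str:
    """Transliterate Serbian Cyrillic text into the Latin script, character by character."""
    return "".join(CYR_TO_LAT.get(ch, ch) for ch in text)

print(to_latin("Београд је главни град Србије."))  # -> Beograd je glavni grad Srbije.
```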

## Training

The model was continually pre-trained on the Booster partition of [Leonardo HPC](https://www.hpc.cineca.it/systems/hardware/leonardo/) using the [NVIDIA NeMo 2.0 framework](https://github.com/NVIDIA/NeMo). The model was trained in BF16-Mixed precision using tensor parallelism across 4 GPUs and sequence parallelism. Training ran on 32 nodes, each containing 4 A100 64GB GPUs. The parallel alignment stage took approximately 1 hour and the second stage took approximately 10 hours.

The model was trained using a cosine learning rate scheduler with linear warmup and the following hyperparameters; a schematic sketch of the schedule is shown after the lists below.

**Parallel alignment**:
- warmup steps: 150
- minimal learning rate: 5e-6
- maximal learning rate: 5e-5
- constant steps: 0
- batch size: 512 (4 million tokens)

**Second stage**:
- warmup steps: 250
- minimal learning rate: 1e-5
- maximal learning rate: 5e-5
- constant steps: 250
- batch size: 512 (4 million tokens)
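
A batch size of 512 corresponds to roughly 4 million tokens per step, assuming Gemma 2's 8,192-token context length (512 × 8,192 ≈ 4.2 M tokens). For reference, below is a schematic sketch of a cosine schedule with linear warmup and an optional constant phase, using the second-stage hyperparameters above; the total step count is an illustrative placeholder, and the exact NeMo scheduler implementation may differ in details (e.g. the warmup starting point):

```python
import math

def learning_rate(step: int, warmup_steps: int, constant_steps: int,
                  total_steps: int, min_lr: float, max_lr: float) -> float:
    """Cosine schedule with linear warmup and an optional constant phase at max_lr."""
    if step < warmup_steps:                      # linear warmup up to max_lr
        return max_lr * step / max(warmup_steps, 1)
    if step < warmup_steps + constant_steps:     # hold at max_lr
        return max_lr
    # cosine decay from max_lr down to min_lr over the remaining steps
    decay_steps = max(total_steps - warmup_steps - constant_steps, 1)
    progress = min((step - warmup_steps - constant_steps) / decay_steps, 1.0)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Second-stage settings from above; total_steps is an illustrative placeholder.
for step in (0, 250, 500, 1500, 3000):
    print(step, learning_rate(step, warmup_steps=250, constant_steps=250,
                              total_steps=3000, min_lr=1e-5, max_lr=5e-5))
```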

## Evaluation

The models were evaluated using the [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) collection of classification tasks on [SloBench](https://slobench.cjvt.si). Additionally, we evaluated our models on [Slovenian-LLM-Eval](https://huggingface.co/datasets/cjvt/slovenian-llm-eval).

Code for evaluation:
- [SloBench tasks](https://github.com/SloLama/slobench_evaluation)
- [Slovenian-LLM-Eval](https://github.com/SloLama/slovenian-llm-eval)

### Slovenian-LLM-Eval results

A comparison between the GaMS models, the base Gemma 2 models and SlovenianGPT (an open-source model for Slovene based on Mistral 7B) is shown in the figure below. All models were evaluated in a 0-shot scenario.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/tDyAjB2dgYXv1dLpFHikd.png)

### SloBench results

The GaMS 2B, 9B and 27B models were evaluated in a 3-shot scenario, except for MultiRC and the translation tasks, where a 0-shot scenario was used. GaMS-9B-Instruct was evaluated in a 0-shot scenario on all tasks. We used guided decoding to ensure that the responses were correctly formatted; a minimal illustration of constraining outputs to a fixed label set is shown below.
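
The actual evaluation harness is linked above; the snippet below is only a minimal, hypothetical illustration of constraining a model's answer to a fixed label set by scoring each candidate continuation and picking the most likely one (the prompt and labels are made up):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cjvt/GaMS-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)

def pick_label(prompt: str, labels: list[str]) -> str:
    """Return the label whose tokens get the highest average log-probability after the prompt.
    Assumes tokenizing prompt and prompt+label yields the same prompt prefix
    (usually true when the label starts with a space)."""
    scores = []
    for label in labels:
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
        full_ids = tokenizer(prompt + label, return_tensors="pt").input_ids.to(model.device)
        with torch.no_grad():
            logits = model(full_ids).logits
        # Log-probabilities of each token given its preceding context.
        log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
        # Positions whose next token belongs to the label.
        label_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
        token_scores = [log_probs[0, pos, full_ids[0, pos + 1]].item() for pos in label_positions]
        scores.append(sum(token_scores) / len(token_scores))
    return labels[scores.index(max(scores))]

# Example: a BoolQ-style yes/no question in Slovene (illustrative).
print(pick_label("Vprašanje: Ali je Ljubljana glavno mesto Slovenije? Odgovor:", [" da", " ne"]))
```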

#### Slovene SuperGLUE

| Rank | Title                 | Average | BoolQ Accuracy | CB Accuracy | CB F1 Score | CB Average | COPA Accuracy | MultiRC EM | MultiRC F1a Score | MultiRC Average | RTE Accuracy | WSC Accuracy |
|------|------------------------|---------|---------------|-------------|-------------|------------|--------------|------------|----------------|----------------|-------------|-------------|
| 1    | GaMS-27B              | 0.7601  | 0.8333        | 0.6440      | 0.5864      | 0.6152     | 0.9540       | 0.3904     | 0.7504         | 0.5704         | 0.7931      | 0.7945      |
| 2    | PrešernGPT 0.1        | 0.7568  | 0.8333        | 0.8520      | 0.5868      | 0.7194     | 0.9740       | 0.4985     | 0.8061         | 0.6523         | 0.8276      | 0.5342      |
| 3    | Gemma 2 27B           | 0.7546  | 0.8333        | 0.6680      | 0.5972      | 0.6326     | 0.9140       | 0.4174     | 0.7295         | 0.5735         | 0.8276      | 0.7466      |
| 4    | GaMS-9B               | 0.7309  | 0.7000        | 0.8400      | 0.7955      | 0.8178     | 0.9000       | 0.3243     | 0.6551         | 0.4897         | 0.7931      | 0.6849      |
| 5    | GaMS-9B-Instruct      | 0.6997  | 0.8000        | 0.7960      | 0.7128      | 0.7544     | 0.8140       | 0.0721     | 0.6174         | 0.3447         | 0.7931      | 0.6918      |
| 6    | Gemma 2 9B            | 0.6980  | 0.8333        | 0.8240      | 0.5683      | 0.6962     | 0.8700       | 0.2282     | 0.5310         | 0.3796         | 0.7241      | 0.6849      |
| 8    | CroSloEngual BERT     | 0.6078  | 0.7333        | 0.7920      | 0.7437      | 0.7679     | 0.5720       | 0.0931     | 0.5241         | 0.3086         | 0.6552      | 0.6096      |
| 11   | SlovenianGPT-Chat     | 0.5078  | 0.7333        | 0.3920      | 0.3829      | 0.3874     | 0.6840       | 0.2432     | 0.4944         | 0.3688         | 0.5172      | 0.3562      |
| 12   | Gemma 2 2B            | 0.4876  | 0.6333        | 0.4520      | 0.2123      | 0.3321     | 0.5180       | 0.1471     | 0.4419         | 0.2945         | 0.5862      | 0.5616      |
| 13   | GaMS-2B               | 0.4790  | 0.5667        | 0.6080      | 0.4880      | 0.5480     | 0.5240       | 0.0631     | 0.5234         | 0.2932         | 0.5517      | 0.3904      |
| 14   | GaMS-1B               | 0.4604  | 0.5000        | 0.6200      | 0.4565      | 0.5382     | 0.4920       | 0.1351     | 0.2675         | 0.2013         | 0.4828      | 0.5479      |
| 15   | GaMS-1B-Chat          | 0.4570  | 0.8000        | 0.4880      | 0.3023      | 0.3951     | 0.4840       | 0.1081     | 0.2428         | 0.1755         | 0.5172      | 0.3692      |


## Usage and Limitations (taken from Gemma 2)

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; input data pre-processing is described and posterior evaluations are
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines for responsible use are provided with the model; see the
    [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability:
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring
  (using evaluation metrics, human review) and the exploration of de-biasing
  techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate against malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.