Add paper link and abstract to model card

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +7 -133
README.md CHANGED
@@ -1,5 +1,4 @@
---
- license: cc-by-4.0
language:
- cs
- pl
@@ -7,6 +6,7 @@ language:
- sl
- en
library_name: transformers
+ license: cc-by-4.0
tags:
- translation
- mt
@@ -16,6 +16,7 @@ tags:
- multilingual
- allegro
- laniqo
+ pipeline_tag: translation
---

# MultiSlav BiDi Models
@@ -28,9 +29,11 @@ tags:
## Multilingual BiDi MT Models

___BiDi___ is a collection of Encoder-Decoder vanilla transformer models trained on the sentence-level Machine Translation task.
- Each model is supporting Bi-Directional translation.
+ Each model supports bi-directional translation. More information is available in our [MultiSlav paper](https://hf.co/papers/2502.14509).

- ___BiDi___ models are part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683). More information will be available soon in our upcoming MultiSlav paper.
+ ___BiDi___ models are part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683).

Experiments were conducted under a research project by the [Machine Learning Research](https://ml.allegro.tech/) lab for [Allegro.com](https://ml.allegro.tech/).
Big thanks to [laniqo.com](https://laniqo.com) for cooperation in the research.
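
For quick orientation, a minimal usage sketch is shown below. The checkpoint id and the Marian-style `>>lang<<` target-language prefix are assumptions for illustration; consult the individual model cards in the collection for the exact ids and control-token convention.

```python
# pip install transformers sentencepiece
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint id -- pick an actual one from the MultiSlav collection.
model_id = "allegro/BiDi-ces-pol"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Bi-directional models are assumed here to select the output language with a
# Marian-style ">>lang<<" prefix token; confirm the convention in the card.
text = ">>pol<< Překlad mezi slovanskými jazyky je zajímavý problém."
batch = tokenizer(text, return_tensors="pt")
generated = model.generate(**batch, max_new_tokens=100)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```
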
@@ -115,149 +118,20 @@ All training parameters are listed in the table below.

### Training hyperparameters:

- | **Hyperparameter** | **Value** |
- |----------------------------|------------------------------------------------------------------------------------------------------------|
- | Total Parameter Size | 209M |
- | Vocab Size | 32k |
- | Base Parameters | [Marian transformer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113) |
- | Number of Encoding Layers | 6 |
- | Number of Decoding Layers | 6 |
- | Model Dimension | 1024 |
- | FF Dimension | 4096 |
- | Heads | 16 |
- | Dropout | 0.1 |
- | Batch Size | mini-batch fit to VRAM |
- | Training Accelerators | 4x A100 40GB |
- | Max Length | 100 tokens |
- | Optimizer | Adam |
- | Warmup steps | 8000 |
- | Context | Sentence-level MT |
- | Languages Supported | See [Bi-Di models available](#Bi-Di-models-available) |
- | Precision | float16 |
- | Validation Freq | 3000 steps |
- | Stop Metric | ChrF |
- | Stop Criterion | 20 Validation steps |
-

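For readers mapping the table to code, the architecture rows correspond roughly to the following `transformers` `MarianConfig`. This is an illustrative sketch only; the released checkpoints ship their own authoritative `config.json`, and training-time settings (optimizer, warmup, stop criterion) are not part of the config.

```python
from transformers import MarianConfig

# Architecture rows from the hyperparameter table above, expressed as a
# MarianConfig for illustration only.
config = MarianConfig(
    vocab_size=32_000,           # Vocab Size: 32k
    encoder_layers=6,            # Number of Encoding Layers
    decoder_layers=6,            # Number of Decoding Layers
    d_model=1024,                # Model Dimension
    encoder_ffn_dim=4096,        # FF Dimension
    decoder_ffn_dim=4096,
    encoder_attention_heads=16,  # Heads
    decoder_attention_heads=16,
    dropout=0.1,                 # Dropout
)
print(config.d_model, config.encoder_layers)
```
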
  ## Training corpora

- The main research question was: "How does adding additional, related languages impact the quality of the model?" We explored this within the Slavic language family.
- ___BiDi___ models are our baseline before expanding the data regime by using higher-level multilinguality.
-
- Datasets were downloaded via the [MT-Data](https://pypi.org/project/mtdata/0.2.10/) library.
- The total number of examples after filtering and deduplication varies depending on the languages supported; see the table below.
-
- | **Language pair** | **Number of training examples** |
- |-------------------|--------------------------------:|
- | Czech ↔ Polish | 63M |
- | Czech ↔ Slovak | 30M |
- | Czech ↔ Slovene | 25M |
- | Polish ↔ Slovak | 26M |
- | Polish ↔ Slovene | 23M |
- | Slovak ↔ Slovene | 18M |
- | ---------------- | ------------------------------- |
- | Czech ↔ English | 151M |
- | English ↔ Polish | 150M |
- | English ↔ Slovak | 52M |
- | English ↔ Slovene | 40M |
-
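The card does not spell out the filtering pipeline; the sketch below illustrates only the kind of exact-pair deduplication the counts above imply, on made-up data.

```python
def dedup_pairs(pairs):
    """Keep the first occurrence of each exact (source, target) pair and
    drop empty-sided pairs -- a minimal sketch of the filtering and
    deduplication step; the real pipeline is not described in this card."""
    seen = set()
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if src and tgt and (src, tgt) not in seen:
            seen.add((src, tgt))
            yield src, tgt

# Illustrative Czech-Polish pairs; the duplicate is dropped.
pairs = [
    ("Ahoj.", "Cześć."),
    ("Ahoj.", "Cześć."),
    ("Dobrý den.", "Dzień dobry."),
]
print(list(dedup_pairs(pairs)))
```
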
- The datasets used (only applicable to specific directions):
-
- | **Corpus** |
- |----------------------|
- | paracrawl |
- | opensubtitles |
- | multiparacrawl |
- | dgt |
- | elrc |
- | xlent |
- | wikititles |
- | wmt |
- | wikimatrix |
- | dcep |
- | ELRC |
- | tildemodel |
- | europarl |
- | eesc |
- | eubookshop |
- | emea |
- | jrc_acquis |
- | ema |
- | qed |
- | elitr_eca |
- | EU-dcep |
- | rapid |
- | ecb |
- | kde4 |
- | news_commentary |
- | kde |
- | bible_uedin |
- | europat |
- | elra |
- | wikipedia |
- | wikimedia |
- | tatoeba |
- | globalvoices |
- | euconst |
- | ubuntu |
- | php |
- | ecdc |
- | eac |
- | eac_reference |
- | gnome |
- | EU-eac |
- | books |
- | EU-ecdc |
- | newsdev |
- | khresmoi_summary |
- | czechtourism |
- | khresmoi_summary_dev |
- | worldbank |

  ## Evaluation

- Evaluation of the models was performed on the [Flores200](https://huggingface.co/datasets/facebook/flores) dataset.
- The table below compares the performance of open-source models and all applicable models from our collection.
- Metric used: Unbabel/wmt22-comet-da.
-
- | **Direction** | **CES → ENG** | **CES → POL** | **CES → SLK** | **CES → SLV** | **ENG → CES** | **ENG → POL** | **ENG → SLK** | **ENG → SLV** | **POL → CES** | **POL → ENG** | **POL → SLK** | **POL → SLV** | **SLK → CES** | **SLK → ENG** | **SLK → POL** | **SLK → SLV** | **SLV → CES** | **SLV → ENG** | **SLV → POL** | **SLV → SLK** |
- |----------------------------------------------------|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
- | **M2M-100** | 87.0 | 89.0 | 92.1 | 89.7 | 88.6 | 86.4 | 88.4 | 87.3 | 89.6 | 84.6 | 89.4 | 88.4 | 92.7 | 86.8 | 89.1 | 89.6 | 90.3 | 86.4 | 88.7 | 90.1 |
- | **NLLB-200** | 88.1 | 88.9 | 91.2 | 88.6 | 90.4 | __88.5__ | 90.1 | 88.8 | 89.4 | __85.8__ | 88.9 | 87.7 | 91.8 | 88.2 | 88.9 | 88.8 | 90.0 | __87.5__ | 88.6 | 89.4 |
- | **Seamless-M4T** | 87.5 | 80.9 | 90.8 | 82.0 | __90.7__ | __88.5__ | __90.6__ | __89.6__ | 79.6 | 85.4 | 80.0 | 76.4 | 91.5 | 87.2 | 81.2 | 82.9 | 80.9 | 87.3 | 76.7 | 81.0 |
- | **OPUS-MT Sla-Sla** | __88.2__ | 82.8 | - | 83.4 | 89.1 | 85.6 | - | 84.5 | 82.9 | 82.2 | - | 81.2 | - | - | - | - | 83.5 | 84.1 | 80.8 | - |
- | **OPUS-MT SK-EN** | - | - | - | - | - | - | 89.5 | - | - | - | - | - | - | __88.4__ | - | - | - | - | - | - |
- | _Our contributions:_ | | | | | | | | | | | | | | | | | | | | |
- | **BiDi Models**<span style="color:green;">*</span> | 87.5 | 89.4 | 92.4 | 89.8 | 87.8 | 86.2 | 87.2 | 86.6 | 90.0 | 85.0 | 89.1 | 88.4 | 92.9 | 87.3 | 88.8 | 89.4 | 90.0 | 86.9 | 88.1 | 89.1 |
- | **P4-pol**<span style="color:red;">◊</span> | - | 89.6 | 90.8 | 88.7 | - | - | - | - | 90.2 | - | 89.8 | 88.7 | 91.0 | - | 89.3 | 88.4 | 89.3 | - | 88.7 | 88.5 |
- | **P5-eng**<span style="color:red;">◊</span> | 88.0 | 89.0 | 90.7 | 89.0 | 88.8 | 87.3 | 88.4 | 87.5 | 89.0 | 85.7 | 88.5 | 87.8 | 91.0 | 88.2 | 88.6 | 88.5 | 89.6 | 87.2 | 88.4 | 88.9 |
- | **P5-ces**<span style="color:red;">◊</span> | 87.9 | 89.6 | __92.5__ | 89.9 | 88.4 | 85.0 | 87.9 | 85.9 | 90.3 | 84.5 | 89.5 | 88.0 | __93.0__ | 87.8 | 89.4 | 89.8 | 90.3 | 85.7 | 87.9 | 89.8 |
- | **MultiSlav-4slav** | - | 89.7 | __92.5__ | 90.0 | - | - | - | - | 90.2 | - | 89.6 | 88.7 | 92.9 | - | 89.4 | 90.1 | __90.6__ | - | 88.9 | __90.2__ |
- | **MultiSlav-5lang** | 87.8 | __89.8__ | __92.5__ | __90.1__ | 88.9 | 86.9 | 88.0 | 87.3 | __90.4__ | 85.4 | 89.8 | __88.9__ | 92.9 | 87.8 | __89.6__ | __90.2__ | __90.6__ | 87.0 | __89.2__ | __90.2__ |
-
- <span style="color:red;">◊</span> a system of two models, *Many2XXX* and *XXX2Many*; see [P5-ces2many](https://huggingface.co/allegro/p5-ces2many)
-
- <span style="color:green;">*</span> results combined across all bi-directional models; each value comes from the applicable model

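A single cell of the table can be reproduced roughly as follows with the `unbabel-comet` package; the sample below uses placeholder sentences rather than actual Flores200 data.

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Load the same metric used in the table above.
model_path = download_model("Unbabel/wmt22-comet-da")
comet = load_from_checkpoint(model_path)

# One CES -> POL sample; in practice, iterate over the Flores200 devtest split.
data = [
    {
        "src": "Dobrý den.",    # Czech source (placeholder)
        "mt": "Dzień dobry.",   # system translation into Polish
        "ref": "Dzień dobry.",  # Polish reference
    },
]
output = comet.predict(data, batch_size=8, gpus=0)
print(output.system_score)  # corpus-level COMET-DA score
```
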
  ## Limitations and Biases

- We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in the target domain. This may be especially problematic for translation from English into Slavic languages, which require explicitly indicated gender; models might hallucinate gender based on biases present in the training data.
-

  ## License

- The model is licensed under CC BY 4.0, which allows for commercial use.
-

  ## Citation
TO BE UPDATED SOON 🤗

- ## Contact Options
-
- Authors:
- - MLR @ Allegro: [Artur Kot](https://linkedin.com/in/arturkot), [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski), [Wojciech Chojnowski](https://linkedin.com/in/wojciech-chojnowski-744702348), [Mieszko Rutkowski](https://linkedin.com/in/mieszko-rutkowski)
- - Laniqo.com: [Artur Nowakowski](https://linkedin.com/in/artur-nowakowski-mt), [Kamil Guttmann](https://linkedin.com/in/kamil-guttmann), [Mikołaj Pokrywka](https://linkedin.com/in/mikolaj-pokrywka)
-
- Please don't hesitate to contact the authors if you have any questions or suggestions:
- - LinkedIn: [Artur Kot](https://linkedin.com/in/arturkot) or [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski)
+ ## Contact Options
 