MultiSlav P5-ces2many
Multilingual Czech-to-Many MT Model
P5-ces2many is an Encoder-Decoder vanilla transformer model trained on the sentence-level Machine Translation task. The model supports translation from Czech to 4 languages: English, Polish, Slovak, and Slovene. This model is part of the MultiSlav collection. More information will be available soon in our upcoming MultiSlav paper.
Experiments were conducted under a research project by the Machine Learning Research lab for Allegro.com. Big thanks to laniqo.com for cooperation in the research.
P5-ces2many is the 5-language Czech-to-Many model, translating from Czech to all applicable languages. This model and P5-many2ces combine into the P5-ces pivot system translating between 5 languages. P5-ces first translates the source sentence to a Czech bridge sentence using the Many2One model, and then translates the Czech bridge sentence to the target language using the One2Many model.
Model description
- Model name: P5-ces2many
- Source Language: Czech
- Target Languages: English, Polish, Slovak, Slovene
- Model Collection: MultiSlav
- Model type: MarianMTModel Encoder-Decoder
- License: CC BY 4.0 (commercial use allowed)
- Developed by: MLR @ Allegro & Laniqo.com
Supported languages
When using the model, you must specify the target language for translation. Target language tokens are represented as 3-letter ISO 639-3 language codes embedded in the format >>xxx<<. All accepted directions and their respective tokens are listed below. Each of them was added as a special token to the SentencePiece tokenizer.
Target Language | First token |
---|---|
English | >>eng<< |
Polish | >>pol<< |
Slovak | >>slk<< |
Slovene | >>slv<< |
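To check which target-language tokens the tokenizer actually contains, you can filter its vocabulary for the >>xxx<< pattern. A minimal sketch (the filtering logic below is our own illustration, not part of the model API):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Allegro/P5-ces2many")

# Language tokens are stored in the vocabulary as ">>xxx<<" entries
lang_tokens = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
print(lang_tokens)  # should include >>eng<<, >>pol<<, >>slk<<, >>slv<<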
Use case quickstart
Example code snippet showing how to use the model. Due to a bug, the MarianMTModel class must be used explicitly.
from transformers import AutoTokenizer, MarianMTModel

o2m_model_name = "Allegro/P5-ces2many"

# Load the tokenizer and the model (MarianMTModel must be used explicitly)
o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name)
o2m_model = MarianMTModel.from_pretrained(o2m_model_name)

text = "Allegro je online platforma pro e-commerce, na které své produkty prodávají střední a malé firmy, stejně jako velké značky."
target_languages = ["eng", "pol", "slk", "slv"]

# Prepend the target-language token to each copy of the source sentence
batch_to_translate = [
    f">>{lang}<<" + " " + text for lang in target_languages
]

translations = o2m_model.generate(**o2m_tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt"))
bridge_translations = o2m_tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for trans in bridge_translations:
    print(trans)
Generated English output:
Allegro is an online e-commerce platform on which medium and small businesses as well as large brands sell their products.
Generated Polish output:
Allegro to platforma e-commerce online, na której swoje produkty sprzedają średnie i małe firmy, a także duże marki.
Generated Slovak output:
Allegro je online platforma pre e-commerce, na ktorej svoje produkty predávajú stredné a malé firmy, rovnako ako veľké značky.
Generated Slovene output:
Allegro je spletna platforma za e-poslovanje, na kateri prodajajo svoje izdelke srednje velika in mala podjetja ter velike blagovne znamke.
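The quickstart above uses the default generation settings. MarianMTModel.generate accepts the standard Hugging Face generation arguments, so beam width and output length can be controlled explicitly; the values below are illustrative assumptions, not the settings used to produce the reported results:

from transformers import AutoTokenizer, MarianMTModel

tokenizer = AutoTokenizer.from_pretrained("Allegro/P5-ces2many")
model = MarianMTModel.from_pretrained("Allegro/P5-ces2many")

batch = tokenizer.batch_encode_plus([">>pol<< Allegro je online platforma pro e-commerce."], return_tensors="pt")
translations = model.generate(
    **batch,
    num_beams=4,         # beam search width (illustrative value)
    max_length=100,      # matches the 100-token limit used during training
    early_stopping=True,
)
print(tokenizer.batch_decode(translations, skip_special_tokens=True)[0])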
To pivot-translate to other languages via a Czech bridge sentence, we need the One2Many model together with the Many2One model. The Many2One model requires an explicit source language token as well. Example for translating from Polish to Slovak:
from transformers import AutoTokenizer, MarianMTModel

m2o_model_name = "Allegro/P5-many2ces"
o2m_model_name = "Allegro/P5-ces2many"

# Many2One model: source language -> Czech bridge
m2o_tokenizer = AutoTokenizer.from_pretrained(m2o_model_name)
m2o_model = MarianMTModel.from_pretrained(m2o_model_name)

# One2Many model: Czech bridge -> target language
o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name)
o2m_model = MarianMTModel.from_pretrained(o2m_model_name)

# Polish source sentence, prefixed with its source-language token
text = ">>pol<<" + " " + "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."

# Step 1: translate Polish to the Czech bridge sentence
translation = m2o_model.generate(**m2o_tokenizer.batch_encode_plus([text], return_tensors="pt"))
bridge_translations = m2o_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)

# Step 2: prefix the bridge with the target-language token and translate to Slovak
post_edited_bridge = ">>slk<<" + " " + bridge_translations[0]
translation = o2m_model.generate(**o2m_tokenizer.batch_encode_plus([post_edited_bridge], return_tensors="pt"))
decoded_translations = o2m_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(decoded_translations[0])
Generated Polish to Slovak pivot translation via Czech:
Allegro je online e-commerce platforma, na ktorej svoje produkty predávajú stredné a malé firmy, rovnako ako veľké značky.
Training
The SentencePiece tokenizer has a vocab size of 80k in total (16k per language). The tokenizer was trained on a randomly sampled part of the training corpus. During training we used the MarianNMT framework. The base Marian configuration used was transformer-big. All training parameters are listed in the table below.
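As an illustration, a shared subword vocabulary of this size could be trained with the SentencePiece Python bindings roughly as follows; this is a hedged sketch, and the input file name and all options other than the 80k vocab size are assumptions rather than the exact setup used:

import sentencepiece as spm

# Train a shared SentencePiece model on a random sample of the parallel corpus
# (80k pieces in total, roughly 16k per language).
spm.SentencePieceTrainer.train(
    input="corpus_sample.txt",    # assumed path to the sampled training text
    model_prefix="multislav_sp",  # outputs multislav_sp.model / multislav_sp.vocab
    vocab_size=80_000,
    character_coverage=1.0,
)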
Training hyperparameters:
Hyperparameter | Value |
---|---|
Total Parameter Size | 258M |
Training Examples | 269M |
Vocab Size | 80k |
Base Parameters | Marian transformer-big |
Number of Encoding Layers | 6 |
Number of Decoding Layers | 6 |
Model Dimension | 1024 |
FF Dimension | 4096 |
Heads | 16 |
Dropout | 0.1 |
Batch Size | mini batch fit to VRAM |
Training Accelerators | 4x A100 40GB |
Max Length | 100 tokens |
Optimizer | Adam |
Warmup steps | 8000 |
Context | Sentence-level MT |
Source Language Supported | Czech |
Target Languages Supported | English, Polish, Slovak, Slovene |
Precision | float16 |
Validation Freq | 3000 steps |
Stop Metric | ChrF |
Stop Criterion | 20 Validation steps |
Training corpora
The main research question was: "How does adding additional, related languages impact the quality of the model?" - we explored it in the Slavic language family. In this model we experimented with expanding the data-regime by using data from multiple target languages and expanding the language-pool by adding English. We found that the additional data clearly improved performance compared to the bi-directional baseline models. For example, in translation from Czech to Polish this allowed us to expand the training data-size from 63M to 269M examples, and from 25M to 269M for Czech to Slovene translation. We used only explicitly open-source data to ensure the open-source license of our model.
Datasets were downloaded via MT-Data library. Number of total examples post filtering and deduplication: 269M.
The datasets used:
Corpus |
---|
paracrawl |
opensubtitles |
multiparacrawl |
dgt |
elrc |
xlent |
wikititles |
wmt |
wikimatrix |
dcep |
ELRC |
tildemodel |
europarl |
eesc |
eubookshop |
emea |
jrc_acquis |
ema |
qed |
elitr_eca |
EU-dcep |
rapid |
ecb |
kde4 |
news_commentary |
kde |
bible_uedin |
europat |
elra |
wikipedia |
wikimedia |
tatoeba |
globalvoices |
euconst |
ubuntu |
php |
ecdc |
eac |
eac_reference |
gnome |
EU-eac |
books |
EU-ecdc |
newsdev |
khresmoi_summary |
czechtourism |
khresmoi_summary_dev |
worldbank |
Evaluation
Evaluation of the models was performed on the Flores200 dataset. The tables below compare the performance of open-source models and all applicable models from our collection. Metrics used: BLEU, ChrF2, and Unbabel/wmt22-comet-da.
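For reference, the same metrics can be computed on your own outputs with the sacrebleu and unbabel-comet packages; a minimal sketch with placeholder sentences (not the Flores200 data used for the tables below):

from sacrebleu.metrics import BLEU, CHRF
from comet import download_model, load_from_checkpoint

sources    = ["Allegro je online platforma pro e-commerce."]
hypotheses = ["Allegro to internetowa platforma e-commerce."]
references = ["Allegro to platforma e-commerce online."]

# BLEU and ChrF (sacrebleu's default ChrF uses beta=2, i.e. ChrF2)
print(BLEU().corpus_score(hypotheses, [references]))
print(CHRF().corpus_score(hypotheses, [references]))

# Reference-based COMET score with Unbabel/wmt22-comet-da
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
print(comet_model.predict(data, batch_size=8, gpus=0).system_score)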
Translation results for Czech to Polish (the Slavic direction with the highest data-regime):
Model | Comet22 | BLEU | ChrF | Model Size |
---|---|---|---|---|
M2M−100 | 89.0 | 18.3 | 48.0 | 1.2B |
NLLB−200 | 88.9 | 17.5 | 47.3 | 1.3B |
Opus Sla-Sla | 82.8 | 13.6 | 43.5 | 64M |
BiDi-ces-pol (baseline) | 89.4 | 19.2 | 49.2 | 209M |
P4-pol ◊ | 89.6 | 19.3 | 49.5 | 2x 242M |
P5-eng ◊ | 89.0 | 18.5 | 48.7 | 2x 258M |
P5-ces2many * | 89.6 | 19.0 | 49.0 | 258M |
MultiSlav-4slav | 89.7 | 18.9 | 49.2 | 242M |
MultiSlav-5lang | 89.8 | 19.0 | 49.3 | 258M |
Translation results for Czech to Slovene (the direction from Czech with the lowest data-regime):
Model | Comet22 | BLEU | ChrF | Model Size |
---|---|---|---|---|
M2M−100 | 89.7 | 26.0 | 54.6 | 1.2B |
NLLB−200 | 88.6 | 23.0 | 51.5 | 1.3B |
Opus Sla-Sla | 83.4 | 18.6 | 48.2 | 64M |
BiDi-ces-slv (baseline) | 89.8 | 27.0 | 55.5 | 209M |
P4-pol ◊ | 88.7 | 24.9 | 53.6 | 2x 242M |
P5-eng ◊ | 89.0 | 25.7 | 54.8 | 2x 258M |
P5-ces2many * | 89.9 | 26.7 | 55.3 | 258M |
MultiSlav-4slav | 90.0 | 27.5 | 55.9 | 242M |
MultiSlav-5lang | 90.1 | 27.4 | 55.8 | 258M |
* this model is the One2Many part of the P5-ces pivot system.
◊ system of 2 models: Many2XXX and XXX2Many.
Limitations and Biases
We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in your prospective domain. This might be especially problematic in translation from English to Slavic languages, which require explicitly indicated gender; the model might hallucinate based on biases present in the training data.
License
The model is licensed under CC BY 4.0, which allows for commercial use.
Citation
TO BE UPDATED SOON 🤗
Contact Options
Authors:
- MLR @ Allegro: Artur Kot, Mikołaj Koszowski, Wojciech Chojnowski, Mieszko Rutkowski
- Laniqo.com: Artur Nowakowski, Kamil Guttmann, Mikołaj Pokrywka
Please don't hesitate to contact the authors if you have any questions or suggestions:
- e-mail: [email protected] or [email protected]
- LinkedIn: Artur Kot or Mikołaj Koszowski