MultiSlav P5-many2ces

MLR @ Allegro.com

Multilingual Many-to-Czech MT Model

P5-many2ces is an Encoder-Decoder vanilla transformer model trained on the sentence-level Machine Translation task. The model supports translation from 4 languages: English, Polish, Slovak, and Slovene, into Czech. This model is part of the MultiSlav collection. More information will be available soon in our upcoming MultiSlav paper.

Experiments were conducted under a research project by the Machine Learning Research lab at Allegro.com. Big thanks to laniqo.com for cooperation on the research.

P5-many2ces is a 5-language Many-to-Czech model translating from all applicable languages to Czech. Together with P5-ces2many it forms the P5-ces pivot system, which translates between 5 languages: P5-ces first translates the input into a Czech bridge sentence using the Many2One model, and then translates the Czech bridge sentence into the target language using the One2Many model.

Model description

  • Model name: P5-many2ces
  • Source Languages: English, Polish, Slovak, Slovene
  • Target Language: Czech
  • Model Collection: MultiSlav
  • Model type: MarianMTModel Encoder-Decoder
  • License: CC BY 4.0 (commercial use allowed)
  • Developed by: MLR @ Allegro & Laniqo.com

Supported languages

To use the model, you must specify the source language of the input. Source-language tokens are 3-letter ISO 639-3 language codes embedded in the format >>xxx<<. All accepted source languages and their respective tokens are listed below. Each of them was added as a special token to the SentencePiece tokenizer.

Source Language | First token
English         | >>eng<<
Polish          | >>pol<<
Slovak          | >>slk<<
Slovene         | >>slv<<
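
For convenience, the mapping from language name to token can be kept in a small dictionary and prepended automatically. The helper below is a minimal sketch of our own, not part of the released code:

SRC_TOKENS = {"English": ">>eng<<", "Polish": ">>pol<<", "Slovak": ">>slk<<", "Slovene": ">>slv<<"}

def with_src_token(text: str, src_lang: str) -> str:
    # Prepend the source-language token expected by the model.
    return f"{SRC_TOKENS[src_lang]} {text}"

print(with_src_token("Hello world", "English"))  # >>eng<< Hello world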

Use case quickstart

Example code snippet showing how to use the model. Due to a bug, the MarianMTModel class must be used explicitly.

from transformers import AutoTokenizer, MarianMTModel

m2o_model_name = "Allegro/P5-many2ces"

# Load the Many2One (many-to-Czech) tokenizer and model.
m2o_tokenizer = AutoTokenizer.from_pretrained(m2o_model_name)
m2o_model = MarianMTModel.from_pretrained(m2o_model_name)

# Prepend the source-language token (here Polish) to the input sentence.
text = ">>pol<<" + " " + "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."

# Translate to Czech and decode the generated token ids.
translations = m2o_model.generate(**m2o_tokenizer.batch_encode_plus([text], return_tensors="pt"))
bridge_translation = m2o_tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(bridge_translation[0])

Generated Czech bridge output:

Allegro je online e-commerce platforma, na které své produkty prodávají střední a malé firmy, stejně jako velké značky.

To pivot-translate to other languages via the Czech bridge sentence, we need the One2Many model (P5-ces2many). The One2Many model also requires an explicit target-language token:


o2m_model_name = "Allegro/P5-ces2many"

# Load the One2Many (Czech-to-many) tokenizer and model.
o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name)
o2m_model = MarianMTModel.from_pretrained(o2m_model_name)

# Prepend the desired target-language token to the Czech bridge sentence.
texts_to_translate = [
    ">>eng<<" + bridge_translation[0],
    ">>slk<<" + bridge_translation[0],
    ">>slv<<" + bridge_translation[0]
]
translation = o2m_model.generate(**o2m_tokenizer.batch_encode_plus(texts_to_translate, return_tensors="pt"))
decoded_translations = o2m_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)

for trans in decoded_translations:
    print(trans)

Generated Polish to English pivot translation via Czech:

Allegro is an online e-commerce platform on which medium and small businesses as well as large brands sell their products.

Generated Polish to Slovak pivot translation via Czech:

Allegro je online e-commerce platforma, na ktorej svoje produkty predávajú stredné a malé firmy, rovnako ako veľké značky.

Generated Polish to Slovene pivot translation via Czech:

Allegro je spletna e-poslovanje platforma, na kateri prodajajo svoje izdelke srednje velika in mala podjetja ter velike blagovne znamke.
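
Putting both steps together, the pivot system can be wrapped in a small helper. This is a minimal sketch; the pivot_translate function below is our own convenience wrapper, not part of the released code:

from transformers import AutoTokenizer, MarianMTModel

# Load both halves of the P5-ces pivot system once.
m2o_tokenizer = AutoTokenizer.from_pretrained("Allegro/P5-many2ces")
m2o_model = MarianMTModel.from_pretrained("Allegro/P5-many2ces")
o2m_tokenizer = AutoTokenizer.from_pretrained("Allegro/P5-ces2many")
o2m_model = MarianMTModel.from_pretrained("Allegro/P5-ces2many")

def pivot_translate(text: str, src_lang: str, tgt_lang: str) -> str:
    # Step 1: source language -> Czech bridge sentence (Many2One model).
    bridge_ids = m2o_model.generate(**m2o_tokenizer(f">>{src_lang}<< {text}", return_tensors="pt"))
    bridge = m2o_tokenizer.batch_decode(bridge_ids, skip_special_tokens=True)[0]
    # Step 2: Czech bridge sentence -> target language (One2Many model).
    out_ids = o2m_model.generate(**o2m_tokenizer(f">>{tgt_lang}<< {bridge}", return_tensors="pt"))
    return o2m_tokenizer.batch_decode(out_ids, skip_special_tokens=True)[0]

print(pivot_translate("Allegro to internetowa platforma e-commerce.", "pol", "eng"))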

Training

The SentencePiece tokenizer has a vocabulary size of 80k in total (16k per language). The tokenizer was trained on a randomly sampled part of the training corpus. For training we used the MarianNMT framework with the base transformer-big configuration. All training parameters are listed in the table below.
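
For reference, a tokenizer with these properties could be trained with the sentencepiece library roughly as sketched below; the input file name is illustrative and this is not the exact recipe used for this model:

import sentencepiece as spm

# Train a SentencePiece model with an 80k vocabulary on a sampled multilingual corpus.
# "corpus_sample.txt" is a placeholder for the randomly sampled training text.
spm.SentencePieceTrainer.train(
    input="corpus_sample.txt",
    model_prefix="multislav_sp",
    vocab_size=80_000,
    # Register the source-language tokens as user-defined symbols.
    user_defined_symbols=[">>eng<<", ">>pol<<", ">>slk<<", ">>slv<<", ">>ces<<"],
)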

Training hyperparameters:

Hyperparameter             | Value
Total Parameter Size       | 258M
Training Examples          | 269M
Vocab Size                 | 80k
Base Parameters            | Marian transformer-big
Number of Encoding Layers  | 6
Number of Decoding Layers  | 6
Model Dimension            | 1024
FF Dimension               | 4096
Heads                      | 16
Dropout                    | 0.1
Batch Size                 | mini batch fit to VRAM
Training Accelerators      | 4x A100 40GB
Max Length                 | 100 tokens
Optimizer                  | Adam
Warmup Steps               | 8000
Context                    | Sentence-level MT
Source Languages Supported | English, Polish, Slovak, Slovene
Target Language Supported  | Czech
Precision                  | float16
Validation Freq            | 3000 steps
Stop Metric                | ChrF
Stop Criterion             | 20 validation steps

Training corpora

The main research question was: "How does adding additional, related languages impact the quality of the model?" We explored it in the Slavic language family. For this model we experimented with expanding the data regime by using data from multiple source languages and expanding the language pool by adding English. We found that the additional fluency data clearly improved performance compared to the bi-directional baseline models. For example, for Polish-to-Czech translation this allowed us to expand the training data from 63M to 269M examples, and from 25M to 269M examples for Slovene-to-Czech translation. We only used explicitly open-source data to ensure the open-source license of our model.

Datasets were downloaded via the MT-Data library. Total number of examples after filtering and deduplication: 269M.

The datasets used:

Corpus
paracrawl
opensubtitles
multiparacrawl
dgt
elrc
xlent
wikititles
wmt
wikimatrix
dcep
ELRC
tildemodel
europarl
eesc
eubookshop
emea
jrc_acquis
ema
qed
elitr_eca
EU-dcep
rapid
ecb
kde4
news_commentary
kde
bible_uedin
europat
elra
wikipedia
wikimedia
tatoeba
globalvoices
euconst
ubuntu
php
ecdc
eac
eac_reference
gnome
EU-eac
books
EU-ecdc
newsdev
khresmoi_summary
czechtourism
khresmoi_summary_dev
worldbank

Evaluation

Evaluation of the models was performed on the Flores200 dataset. The tables below compare the performance of open-source models and all applicable models from our collection. Metrics: BLEU, ChrF2, and Unbabel/wmt22-comet-da.
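
These metrics can be computed with the sacrebleu and unbabel-comet packages, roughly as sketched below. Variable names and batch settings are our own; this is not the exact evaluation script used for the paper:

import sacrebleu
from comet import download_model, load_from_checkpoint  # unbabel-comet >= 2.0

# hypotheses: model outputs; references: Flores200 reference translations;
# sources: Flores200 source sentences (needed by COMET).
hypotheses = ["..."]
references = ["..."]
sources = ["..."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
chrf = sacrebleu.corpus_chrf(hypotheses, [references]).score  # chrF with beta=2 by default

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
comet_score = comet_model.predict(comet_data, batch_size=8, gpus=0).system_score

print(f"BLEU={bleu:.1f} chrF={chrf:.1f} COMET={comet_score:.3f}")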

Translation results for Polish to Czech (the Slavic direction with the highest data regime):

Model                   | Comet22 | BLEU | ChrF | Model Size
M2M-100                 | 89.6    | 19.8 | 47.7 | 1.2B
NLLB-200                | 89.4    | 19.2 | 46.7 | 1.3B
Opus Sla-Sla            | 82.9    | 14.6 | 42.6 | 64M
BiDi-ces-pol (baseline) | 90.0    | 20.3 | 48.5 | 209M
P4-pol                  | 90.2    | 20.2 | 48.5 | 2x 242M
P5-eng                  | 89.0    | 19.9 | 48.3 | 2x 258M
P5-many2ces *           | 90.3    | 20.2 | 48.6 | 258M
MultiSlav-4slav         | 90.2    | 20.6 | 48.7 | 242M
MultiSlav-5lang         | 90.4    | 20.7 | 48.9 | 258M

Translation results for Slovene to Czech (the direction to Czech with the lowest data regime):

Model                   | Comet22 | BLEU | ChrF | Model Size
M2M-100                 | 90.3    | 24.3 | 51.6 | 1.2B
NLLB-200                | 90.0    | 22.5 | 49.9 | 1.3B
Opus Sla-Sla            | 83.5    | 17.4 | 46.0 | 1.3B
BiDi-ces-slv (baseline) | 90.0    | 24.4 | 52.0 | 209M
P4-pol                  | 89.3    | 22.7 | 50.4 | 2x 242M
P5-eng                  | 89.6    | 24.7 | 52.4 | 2x 258M
P5-many2ces *           | 90.3    | 24.9 | 52.4 | 258M
MultiSlav-4slav         | 90.6    | 25.3 | 52.7 | 242M
MultiSlav-5lang         | 90.6    | 25.2 | 52.5 | 258M

* This model is the Many2One part of the P5-ces pivot system.

2x: pivot system of 2 models, Many2XXX and XXX2Many.

Limitations and Biases

We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in your target domain. This might be especially problematic for translation from English into Slavic languages, which require explicitly indicated gender; the model might hallucinate gender based on bias present in the training data.

License

The model is licensed under CC BY 4.0, which allows for commercial use.

Citation

TO BE UPDATED SOON 🤗

Contact Options

Authors:

Please don't hesitate to contact the authors if you have any questions or suggestions:
