mt5-small-ukrainian-style-editor

This model is a fine-tuned version of google/mt5-small designed for stylistic editing of Ukrainian texts. It transforms raw or non-native phrasing into polished, stylistically improved Ukrainian, making it suitable for academic, journalistic, or official contexts.

It achieves the following results on the evaluation set:

  • Loss: 0.2027
  • SacreBLEU score: 41.4271
  • Counts: [18650, 13567, 10522, 7822]
  • Totals: [25663, 22534, 19416, 16463]
  • Precisions (%): [72.67, 60.21, 54.19, 47.51]
  • Brevity penalty (BP): 0.7151
  • System length: 25663
  • Reference length: 34270
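
The score, BP, and length fields above follow the standard BLEU definition used by SacreBLEU. A minimal sketch recomputing the reported score from the listed counts, totals, and lengths:

import math

# Values reported in the evaluation results above
counts = [18650, 13567, 10522, 7822]   # matching n-grams (1- to 4-grams)
totals = [25663, 22534, 19416, 16463]  # total n-grams in the system output
sys_len, ref_len = 25663, 34270

# Modified n-gram precisions
precisions = [c / t for c, t in zip(counts, totals)]

# Brevity penalty: exp(1 - ref_len / sys_len) when the output is shorter than the reference
bp = math.exp(1 - ref_len / sys_len) if sys_len < ref_len else 1.0

# BLEU = BP * geometric mean of the four precisions, scaled to 0-100
score = 100 * bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(round(bp, 4), round(score, 2))  # ~0.7151, ~41.43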

🧠 Model Description

This model was trained using a hybrid approach, combining:

  • Dictionary-based style correction (e.g., calque removal); a minimal sketch of this step appears at the end of this section.
  • Fine-tuning on paragraph-aligned pairs of original and stylistically improved Ukrainian text.

The base model is multilingual T5 (mT5), an encoder-decoder architecture with strong cross-lingual generalization, here adapted to the specifics of Ukrainian syntax and style.
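
To make the dictionary-based stage concrete, here is a minimal sketch of such a pre-editing pass. The replacement table and function name are hypothetical illustrations, not the actual dictionary used to prepare the training data:

import re

# Hypothetical calque dictionary: common calques -> preferred Ukrainian forms.
# Illustrative entries only; not the dictionary used for this model.
CALQUE_MAP = {
    "приймати участь": "брати участь",
    "на протязі": "протягом",
}

def apply_dictionary_corrections(text: str) -> str:
    """Replace known calques before the text is passed to the seq2seq model."""
    for calque, replacement in CALQUE_MAP.items():
        text = re.sub(re.escape(calque), replacement, text, flags=re.IGNORECASE)
    return text

print(apply_dictionary_corrections("Студенти будуть приймати участь у конференції."))
# -> "Студенти будуть брати участь у конференції."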

📌 Intended Uses & Limitations

✅ Intended Uses

  • Stylistic enhancement of Ukrainian texts.
  • Detection and correction of translationese or poor phrasing.
  • Text improvement for public communication, official writing, and journalism.

⚠️ Limitations

  • Not intended for grammar correction or spell-checking.
  • May occasionally reproduce non-stylistic errors that were present in the training data.
  • Performance is best on formal or semi-formal text.

📊 Training and Evaluation Data

Training used a custom dataset uploaded to Hugging Face: Kulynych/training_data.
Each entry contains:

  • input_text: raw Ukrainian text (possibly containing calques or awkward phrasing).
  • target_text: human-edited version of the same paragraph, stylistically improved.
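
A minimal sketch of how these paragraph pairs can be loaded and prepared for seq2seq fine-tuning. The column names follow the description above; the max_length value and the use of the default split are assumptions:

from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("Kulynych/training_data")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

def preprocess(batch):
    # Tokenize raw paragraphs as encoder input and edited paragraphs as decoder targets.
    model_inputs = tokenizer(batch["input_text"], max_length=192, truncation=True)
    labels = tokenizer(text_target=batch["target_text"], max_length=192, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)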

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 2
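
For reference, a minimal Seq2SeqTrainingArguments sketch mirroring these values. The output_dir and the per-epoch evaluation strategy are assumptions not recorded in this card:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-ukrainian-style-editor",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="epoch",        # assumed; the card reports per-epoch validation metrics
    predict_with_generate=True,   # needed so SacreBLEU can be computed during evaluation
)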

Training results

| Training Loss | Epoch | Step | Validation Loss | SacreBLEU | Counts | Totals | Precisions (%) | BP | Sys Len | Ref Len |
|---------------|-------|------|-----------------|-----------|--------|--------|----------------|----|---------|---------|
| 0.2888 | 1.0 | 3129 | 0.2095 | 41.1095 | [18518, 13411, 10404, 7739] | [25905, 22776, 19652, 16594] | [71.48, 58.88, 52.94, 46.64] | 0.7240 | 25905 | 34270 |
| 0.2325 | 2.0 | 6258 | 0.2027 | 41.4271 | [18650, 13567, 10522, 7822] | [25663, 22534, 19416, 16463] | [72.67, 60.21, 54.19, 47.51] | 0.7151 | 25663 | 34270 |

Framework versions

  • Transformers 4.50.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1

Evaluation Metrics

  • SacreBLEU score: 41.43 (after 2nd epoch)
  • Validation Loss: 0.2027

| Epoch | Step | Val Loss | SacreBLEU | BP | Precisions (%) |
|-------|------|----------|-----------|--------|------------------------------|
| 1 | 3129 | 0.2095 | 41.11 | 0.7240 | [71.48, 58.88, 52.94, 46.63] |
| 2 | 6258 | 0.2027 | 41.43 | 0.7151 | [72.67, 60.20, 54.19, 47.51] |
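
These scores can be reproduced with the sacrebleu metric from the evaluate library. A minimal sketch with placeholder sentences (an actual evaluation run would use generated outputs against the validation split's target_text references):

import evaluate

sacrebleu = evaluate.load("sacrebleu")

# Placeholder prediction/reference pair; in practice these come from model.generate()
predictions = ["Згідно з отриманими даними, ситуація погіршилася."]
references = [["За отриманими даними, ситуація погіршилася."]]

result = sacrebleu.compute(predictions=predictions, references=references)
# The returned dict contains the fields reported above:
# score, counts, totals, precisions, bp, sys_len, ref_len
print(result["score"])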

💻 How to Use

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Kulynych/mt5-small-ukrainian-style-editor")
model = AutoModelForSeq2SeqLM.from_pretrained("Kulynych/mt5-small-ukrainian-style-editor")

# Example input with a stylistically awkward relative clause ("котрі ми отримали");
# English gloss: "According to the data that we received, the situation worsened."
text = "Згідно з даними, котрі ми отримали, ситуація погіршилась."
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_length=192)
print(tokenizer.decode(output[0], skip_special_tokens=True))
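
The call above uses default greedy decoding. As a generic Transformers generation option (not something documented for this model), beam search can be tried for potentially more fluent edits:

output = model.generate(**inputs, max_length=192, num_beams=4)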