# en_to_dzo_nllb_mul_mt_nlp_m4
This model is a fine-tuned version of facebook/nllb-200-distilled-600M for English-to-Dzongkha machine translation; the fine-tuning dataset is not documented. It achieves the following results on the evaluation set:
- Loss: 1.7293
- Bleu: 19.2074
- Gen Len: 15.6777
## Model description
More information needed
## Intended uses & limitations
More information needed
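Since the card does not document usage, below is a minimal inference sketch for an NLLB-based checkpoint like this one, using the repo id from this card. The language codes `eng_Latn` (English) and `dzo_Tibt` (Dzongkha, Tibetan script) are the standard FLORES-200 codes used by NLLB tokenizers; their use here is an assumption, not something the card confirms.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Zeref02210217-cst/en_to_dzo_nllb_mul_mt_nlp_m4"

# NLLB tokenizers accept a source-language code; eng_Latn is English in Latin script.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Hello, how are you?"
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start with the Dzongkha language token (assumed code: dzo_Tibt).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("dzo_Tibt"),
    max_length=64,  # eval Gen Len averages ~16 tokens, so 64 leaves headroom
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```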
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
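For reference, the list above maps onto transformers' `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction from the reported hyperparameters, not the author's actual training script; the `output_dir`, `eval_strategy`, and `predict_with_generate` values are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; not the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="en_to_dzo_nllb_mul_mt_nlp_m4",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW with default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=9,
    fp16=True,                    # "Native AMP" mixed-precision training
    eval_strategy="epoch",        # assumption: the results table reports per-epoch metrics
    predict_with_generate=True,   # assumption: required to compute BLEU / Gen Len at eval
)
```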
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.2141        | 1.0   | 562  | 1.8532          | 9.1051  | 16.0711 |
| 1.7479        | 2.0   | 1124 | 1.7386          | 10.4502 | 15.3854 |
| 1.501         | 3.0   | 1686 | 1.6975          | 18.3404 | 16.2342 |
| 1.3186        | 4.0   | 2248 | 1.6957          | 17.8507 | 15.6396 |
| 1.1776        | 5.0   | 2810 | 1.6957          | 17.353  | 15.4945 |
| 1.0629        | 6.0   | 3372 | 1.7076          | 17.9624 | 15.6937 |
| 0.9827        | 7.0   | 3934 | 1.7202          | 17.6422 | 15.8619 |
| 0.9099        | 8.0   | 4496 | 1.7254          | 19.1543 | 15.5365 |
| 0.8253        | 9.0   | 5058 | 1.7293          | 19.2074 | 15.6777 |
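Note that validation loss bottoms out at epoch 4-5 (1.6957) while BLEU continues to improve through epoch 9, so the final checkpoint trades a slightly higher validation loss for the best BLEU score.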
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0