Wav2vec2-fula-567

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the LEONEL-MAIA/FULFULDE-41 - DEFAULT dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0718
  • Wer: 0.5262
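A WER of 0.5262 means roughly half of the reference words are wrong (substituted, deleted, or inserted) after alignment. As an illustration only — not the exact evaluation script used for this model — word error rate can be computed as a word-level Levenshtein distance divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat down", "the cat down"))  # one deletion over 4 words -> 0.25
```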

Model description

More information needed

Intended uses & limitations

More information needed
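As a fine-tuned wav2vec2 CTC checkpoint, the model can be loaded for automatic speech recognition with the transformers pipeline. A minimal usage sketch, assuming the hub id Leonel-Maia/Wav2vec2-fula-567 and 16 kHz mono audio (the sampling rate XLS-R expects):

```python
def transcribe(audio_path: str) -> str:
    """Transcribe a 16 kHz audio file with the fine-tuned model.

    The import is kept inside the function so this sketch can be defined
    without transformers installed.
    """
    from transformers import pipeline  # requires transformers + torch

    asr = pipeline(
        "automatic-speech-recognition",
        model="Leonel-Maia/Wav2vec2-fula-567",  # hub id assumed from this card
    )
    return asr(audio_path)["text"]

# Example (downloads the model weights on first call):
# print(transcribe("sample.wav"))
```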

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 60.0
  • mixed_precision_training: Native AMP
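The linear scheduler with 1000 warmup steps ramps the learning rate from 0 to 3e-4 over the first 1000 optimizer steps, then decays it linearly to 0 by the end of training. A sketch of that shape (the total step count here is an illustrative assumption; the real value depends on the dataset size and the 60 epochs):

```python
def linear_schedule(step: int, peak_lr: float = 3e-4,
                    warmup_steps: int = 1000, total_steps: int = 77_800) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # ramp up from 0 to peak_lr
    # decay linearly from peak_lr at the end of warmup down to 0 at total_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule(500))   # halfway through warmup -> 1.5e-4
print(linear_schedule(1000))  # peak learning rate -> 3e-4
```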

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 0.5177        | 0.3855 | 500   | 0.3770          | 0.6746 |
| 0.3171        | 0.7710 | 1000  | 0.1915          | 0.5956 |
| 0.2224        | 1.1565 | 1500  | 0.1434          | 0.5711 |
| 0.2386        | 1.5420 | 2000  | 0.1338          | 0.5773 |
| 0.1917        | 1.9275 | 2500  | 0.1108          | 0.5524 |
| 0.1726        | 2.3130 | 3000  | 0.1076          | 0.5468 |
| 0.1608        | 2.6985 | 3500  | 0.0950          | 0.5438 |
| 0.1313        | 3.0840 | 4000  | 0.0903          | 0.5342 |
| 0.1376        | 3.4695 | 4500  | 0.0901          | 0.5404 |
| 0.1371        | 3.8551 | 5000  | 0.0906          | 0.5366 |
| 0.1298        | 4.2406 | 5500  | 0.0913          | 0.5400 |
| 0.1054        | 4.6261 | 6000  | 0.0818          | 0.5315 |
| 0.106         | 5.0116 | 6500  | 0.0820          | 0.5314 |
| 0.1543        | 5.3971 | 7000  | 0.0792          | 0.5367 |
| 0.115         | 5.7826 | 7500  | 0.0849          | 0.5329 |
| 0.1077        | 6.1681 | 8000  | 0.0810          | 0.5289 |
| 0.101         | 6.5536 | 8500  | 0.0746          | 0.5270 |
| 0.0948        | 6.9391 | 9000  | 0.0767          | 0.5282 |
| 0.102         | 7.3246 | 9500  | 0.0718          | 0.5263 |
| 0.0816        | 7.7101 | 10000 | 0.0721          | 0.5236 |
| 0.0942        | 8.0956 | 10500 | 0.0789          | 0.5272 |
| 0.0902        | 8.4811 | 11000 | 0.0764          | 0.5266 |
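The logged step/epoch pairs imply the size of the training set: 500 steps correspond to 0.3855 epochs, i.e. about 1297 optimizer steps per epoch, which at a train batch size of 8 suggests roughly 10,400 training examples. A back-of-envelope check from the first table row (assuming no gradient accumulation):

```python
steps, epoch = 500, 0.3855          # first logged checkpoint in the table above
steps_per_epoch = steps / epoch     # ~1297 optimizer steps per epoch
batch_size = 8                      # train_batch_size (no gradient accumulation assumed)
examples = steps_per_epoch * batch_size
print(round(steps_per_epoch), round(examples))  # -> 1297 10376
```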

Framework versions

  • Transformers 4.50.3
  • Pytorch 2.7.0+cu126
  • Datasets 3.5.0
  • Tokenizers 0.21.1
Model size: 315M parameters (F32 tensors, safetensors format)
