wav2vec2-xls-r-300m-fr-30m

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7387
  • WER: 0.3945
  • CER: 0.1314
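
The checkpoint name and the WER/CER evaluation indicate a speech-recognition (CTC) fine-tune of wav2vec2-xls-r-300m, so a minimal transcription sketch with the standard transformers Wav2Vec2 classes would look roughly like the following. The audio file name, the torchaudio loading step, and the mono/16 kHz preprocessing are illustrative assumptions, not details taken from this card.

```python
# Minimal transcription sketch; assumes the checkpoint exposes the standard
# Wav2Vec2 CTC interface. The audio file and preprocessing are placeholders.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Rateddany/wav2vec2-xls-r-300m-fr-30m"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a clip, mix down to mono, and resample to the 16 kHz expected by XLS-R.
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```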

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 7e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.98) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 5000
  • mixed_precision_training: Native AMP
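
A hedged reconstruction of this setup with transformers.TrainingArguments is sketched below. Only the values listed above come from the card; output_dir is a placeholder, and the batch sizes are assumed to be per-device values on a single GPU with no gradient accumulation.

```python
# Hedged reconstruction of the training configuration listed above.
# Assumptions not stated in the card: output_dir is a placeholder, and the
# batch sizes are per-device values on one GPU without gradient accumulation.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-fr-30m",  # placeholder
    learning_rate=7e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```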

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER    | CER    |
|---------------|----------|------|-----------------|--------|--------|
| 3.0009        | 17.8571  | 500  | 2.9972          | 1.0    | 1.0    |
| 2.7032        | 35.7143  | 1000 | 2.4955          | 0.9906 | 0.8773 |
| 1.123         | 53.5714  | 1500 | 0.8270          | 0.5961 | 0.1858 |
| 0.7079        | 71.4286  | 2000 | 0.6459          | 0.4674 | 0.1450 |
| 0.496         | 89.2857  | 2500 | 0.6496          | 0.4271 | 0.1385 |
| 0.4189        | 107.1429 | 3000 | 0.6491          | 0.4134 | 0.1335 |
| 0.3499        | 125.0    | 3500 | 0.7011          | 0.4022 | 0.1332 |
| 0.3269        | 142.8571 | 4000 | 0.7218          | 0.3997 | 0.1324 |
| 0.2987        | 160.7143 | 4500 | 0.7287          | 0.3868 | 0.1277 |
| 0.2957        | 178.5714 | 5000 | 0.7387          | 0.3945 | 0.1314 |
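
WER and CER are word- and character-level edit distances normalized by the reference length. A minimal sketch with the jiwer package follows; the choice of jiwer and the example sentences are assumptions, as the card does not state which tool produced the reported numbers.

```python
# Illustrative WER/CER computation with jiwer; the sentences are made up
# and not taken from the evaluation data.
import jiwer

references = ["bonjour tout le monde"]   # ground-truth transcripts
hypotheses = ["bonjour tous le monde"]   # model outputs

print("WER:", jiwer.wer(references, hypotheses))  # word error rate
print("CER:", jiwer.cer(references, hypotheses))  # character error rate
```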

Framework versions

  • Transformers 4.50.0
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1
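
To compare a local environment against the versions listed above, a quick check:

```python
# Print installed versions to compare against the ones reported in this card.
import transformers, torch, datasets, tokenizers

print("transformers", transformers.__version__)  # card reports 4.50.0
print("torch", torch.__version__)                # card reports 2.6.0+cu124
print("datasets", datasets.__version__)          # card reports 3.5.0
print("tokenizers", tokenizers.__version__)      # card reports 0.21.1
```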