speecht5_finetuned_voxpopuli_nl

This model is a SpeechT5 text-to-speech model fine-tuned on the Dutch (nl) portion of the VoxPopuli dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4590

Model description

More information needed

Intended uses & limitations

More information needed
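
Although the card leaves usage undocumented, the checkpoint follows the standard SpeechT5 text-to-speech interface in transformers. Below is a minimal inference sketch; the repo id, example sentence, and random speaker embedding are placeholder assumptions (in practice you would use a real 512-dim x-vector, e.g. one extracted with speechbrain/spkrec-xvect-voxceleb):

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

checkpoint = "speecht5_finetuned_voxpopuli_nl"  # adjust to the actual Hub repo id

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een testzin.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim x-vector speaker embedding.
# A random vector is used here purely as a placeholder voice.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(
    inputs["input_ids"], speaker_embeddings, vocoder=vocoder
)
# `speech` is a 1-D float tensor holding a 16 kHz waveform, e.g. for
# soundfile.write("speech.wav", speech.numpy(), samplerate=16000).
```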

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent Seq2SeqTrainingArguments configuration follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
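
For reference, here is a minimal sketch of how these values might map onto transformers' Seq2SeqTrainingArguments. The output_dir and any setting not listed above are assumptions, not taken from the original training script:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # 4 x 8 = effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,                 # training_steps: 4000
    fp16=True,                      # "Native AMP" mixed-precision training
)
```

These arguments would then be passed to a Seq2SeqTrainer together with the model, processor, and the prepared VoxPopuli splits.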

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6863        | 0.8607  | 200  | 0.6124          |
| 0.5721        | 1.7230  | 400  | 0.5167          |
| 0.5396        | 2.5853  | 600  | 0.4984          |
| 0.5289        | 3.4476  | 800  | 0.4868          |
| 0.5172        | 4.3098  | 1000 | 0.4815          |
| 0.5169        | 5.1721  | 1200 | 0.4771          |
| 0.5108        | 6.0344  | 1400 | 0.4740          |
| 0.5086        | 6.8951  | 1600 | 0.4715          |
| 0.5042        | 7.7574  | 1800 | 0.4699          |
| 0.4939        | 8.6197  | 2000 | 0.4678          |
| 0.4965        | 9.4820  | 2200 | 0.4667          |
| 0.5004        | 10.3443 | 2400 | 0.4644          |
| 0.4906        | 11.2066 | 2600 | 0.4617          |
| 0.4889        | 12.0689 | 2800 | 0.4612          |
| 0.493         | 12.9295 | 3000 | 0.4601          |
| 0.4893        | 13.7918 | 3200 | 0.4599          |
| 0.4894        | 14.6541 | 3400 | 0.4600          |
| 0.4922        | 15.5164 | 3600 | 0.4594          |
| 0.491         | 16.3787 | 3800 | 0.4599          |
| 0.482         | 17.2410 | 4000 | 0.4590          |

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1