Whisper Small UZ - Link Data

This model is a fine-tuned version of openai/whisper-small on the Tashkent Dialects Small dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5056
  • WER: 41.7576
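
For reference, a minimal inference sketch using the transformers automatic-speech-recognition pipeline with this checkpoint; the audio path is a placeholder, not a file shipped with the model.

```python
# A minimal sketch, assuming the repo id shown on this model page;
# the audio file path is a hypothetical placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IbratDO/whisper-small-linkdata-tashkent-dialect",
)

# The pipeline decodes and resamples common audio formats via ffmpeg,
# so a plain 16 kHz mono WAV is the simplest input.
result = asr("path/to/uzbek_audio.wav")
print(result["text"])
```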

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
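
As a rough guide, these settings map onto Seq2SeqTrainingArguments as sketched below; output_dir is a hypothetical placeholder, and the logging/evaluation options used for the actual run are not documented here.

```python
# A sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments;
# output_dir is an assumed placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-uz-linkdata",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```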

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 0.4746        | 1.2210 | 1000 | 0.5591          | 48.2297 |
| 0.3081        | 2.4420 | 2000 | 0.4815          | 42.7295 |
| 0.2342        | 3.6630 | 3000 | 0.4749          | 41.7905 |
| 0.1822        | 4.8840 | 4000 | 0.4863          | 41.4266 |
| 0.1224        | 6.1050 | 5000 | 0.5056          | 41.7576 |
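
The WER column is conventionally computed with the evaluate library's wer metric and reported as a percentage; a minimal sketch (the prediction/reference strings are placeholders, not dataset samples):

```python
# A minimal sketch of computing WER as reported above (as a percentage);
# the example strings are hypothetical, not taken from the dataset.
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["salom dunyo"],
    references=["salom dunyo qalaysan"],
)
print(f"WER: {wer:.4f}")
```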

Framework versions

  • Transformers 4.57.0
  • PyTorch 2.8.0+cu126
  • Datasets 4.1.1
  • Tokenizers 0.22.1