---
library_name: transformers
language:
  - en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - Korean_english
metrics:
  - wer
model-index:
  - name: Whisper tiny Korean
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Korean English
          type: Korean_english
          args: 'config: default, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 12.536585365853659
---

Whisper tiny Korean

This model is a fine-tuned version of openai/whisper-tiny on the Korean English dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3119
  • Wer: 12.5366
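
The card does not include a usage snippet, so the following is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The repository id and audio file name are placeholders, not values taken from this card.

```python
# Minimal inference sketch.
# Assumptions: the model id below is a placeholder for the actual repository id,
# and "sample.wav" is any local audio file (Whisper operates on 16 kHz audio).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-korean",  # hypothetical model id
)

result = asr("sample.wav")
print(result["text"])
```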

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
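
As an illustration, the hyperparameters above map roughly onto a Seq2SeqTrainingArguments configuration like the one below. The output directory, evaluation/save cadence, and any settings not listed above are assumptions, not values stated in the card.

```python
# Sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir, eval_strategy, eval_steps, and save_steps are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-korean",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",                 # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    eval_strategy="steps",               # assumption: evaluate every 1000 steps
    eval_steps=1000,
    save_steps=1000,
)
```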

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1156        | 1.0173 | 1000 | 0.2821          | 11.6829 |
| 0.0057        | 2.0346 | 2000 | 0.2956          | 12.0976 |
| 0.0065        | 3.0519 | 3000 | 0.3061          | 12.2195 |
| 0.0016        | 4.0692 | 4000 | 0.3077          | 12.3902 |
| 0.0022        | 5.0865 | 5000 | 0.3119          | 12.5366 |
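
The Wer column above is a percentage. A minimal sketch of how such a score can be computed with the evaluate library is shown below; the prediction and reference strings are placeholders, not output from this model.

```python
# Sketch: computing word error rate (WER) as a percentage with the evaluate library.
# The strings below are placeholders for model transcriptions and ground truth.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello my name is minsu"]
references = ["hello my name is minsoo"]

# evaluate returns WER as a fraction; the card reports it scaled to percent.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```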

Framework versions

  • Transformers 4.51.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 3.4.1
  • Tokenizers 0.21.1