
Whisper Large Ru ORD 0.9 PEFT 4-bit Q DoRA - Mizoru

This model is a fine-tuned version of openai/whisper-large-v2 on the ORD_0.9synth dataset. It achieves the following results on the evaluation set (a loading sketch follows the metrics):

  • Loss: 2.9191
  • WER: 69.3030
  • CER: 36.7586
  • Clean WER: 38.4918
  • Clean CER: 23.1425
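
The card itself is sparse, so here is a minimal loading sketch. It assumes the adapter repo is mizoru/whisper-large-ru-synth_0.1a_peft (the id shown for this model) and that inference should mirror the 4-bit quantized base implied by the title; the language/task settings are an assumption based on the "Ru" in the name.

```python
import torch
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Repo ids as shown on this page; treat ADAPTER_ID as an assumption if the card moves.
BASE_ID = "openai/whisper-large-v2"
ADAPTER_ID = "mizoru/whisper-large-ru-synth_0.1a_peft"

# Load the frozen base model in 4-bit, mirroring the quantized setup named in the title.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = WhisperForConditionalGeneration.from_pretrained(
    BASE_ID, quantization_config=bnb, device_map="auto"
)

# Attach the DoRA adapter weights on top of the quantized base.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# The processor (feature extractor + tokenizer) comes from the base checkpoint.
processor = WhisperProcessor.from_pretrained(BASE_ID, language="russian", task="transcribe")
```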

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Transformers training arguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 4
  • mixed_precision_training: Native AMP
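
For reference, here is how these values map onto transformers' Seq2SeqTrainingArguments. This is an illustrative reconstruction rather than the original training script, and the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters above; not the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-ru-ord-peft",  # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so nothing to set.
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
)
```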

Training results

Training Loss  Epoch  Step  Validation Loss  WER      CER      Clean WER  Clean CER
1.1554         1.0     613  2.6287           77.3618  52.7094  55.4565    41.9807
1.0513         2.0    1226  2.7804           73.4258  41.9948  46.7922    29.5494
0.9911         3.0    1839  2.7527           72.0139  39.5863  43.3650    26.3904
0.7510         4.0    2452  2.9191           69.3030  36.7586  38.4918    23.1425
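
The card does not say how these metrics were computed. A common approach, sketched below under that assumption, is the Hugging Face evaluate library, which returns WER/CER as fractions that are multiplied by 100 to match the table; the transcripts are placeholders.

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts: pair each model output with its reference text.
predictions = ["пример распознанного текста"]
references = ["пример эталонного текста"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```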

Framework versions

  • PEFT 0.12.0
  • Transformers 4.41.0.dev0
  • PyTorch 2.3.1
  • Datasets 3.2.0
  • Tokenizers 0.19.1