whisper-large-v2-ft-tms-good-and-bad-60-250504-v1

This model is a fine-tuned version of openai/whisper-large-v2 on an unspecified dataset (a minimal loading sketch is given below the evaluation results). It achieves the following results on the evaluation set:

  • Loss: 3.0025
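
This checkpoint is distributed as a PEFT (LoRA) adapter rather than full model weights (see the framework versions below). The following is a minimal loading sketch, assuming the adapter repo id dylanewbie/whisper-large-v2-ft-tms-good-and-bad-60-250504-v1 from this card's title; the silent placeholder audio is illustrative only:

```python
# Minimal inference sketch: attach the LoRA adapter to the base Whisper model.
# The adapter repo id is taken from this card's title; adjust paths as needed.
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "dylanewbie/whisper-large-v2-ft-tms-good-and-bad-60-250504-v1"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# Placeholder input: one second of silence at 16 kHz (Whisper's expected
# sampling rate); replace with real audio samples.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(
        input_features=inputs.input_features.to(model.device, torch.float16)
    )
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```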

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 64
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 50
  • mixed_precision_training: Native AMP
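
For reference, here is a hedged sketch of how the values above map onto transformers Seq2SeqTrainingArguments; the output directory is a placeholder, and the reported total batch size of 64 comes from running 8 processes (e.g. via torchrun --nproc_per_node 8) with a per-device batch size of 8:

```python
# Sketch of training arguments matching the hyperparameters above.
# Output dir is a placeholder; dataset, model, and Trainer setup are omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-ft-tms-good-and-bad-60-250504-v1",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # x 8 GPUs = total train batch size 64
    per_device_eval_batch_size=8,    # x 8 GPUs = total eval batch size 64
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    fp16=True,                       # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```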

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 12.6652       | 1.0   | 1    | 12.6099         |
| 12.7284       | 2.0   | 2    | 12.6099         |
| 12.6979       | 3.0   | 3    | 12.6099         |
| 12.8604       | 4.0   | 4    | 12.6099         |
| 12.6964       | 5.0   | 5    | 12.6099         |
| 12.6169       | 6.0   | 6    | 12.6099         |
| 12.5936       | 7.0   | 7    | 12.6099         |
| 12.6436       | 8.0   | 8    | 12.6099         |
| 12.6892       | 9.0   | 9    | 12.4989         |
| 12.5564       | 10.0  | 10   | 12.1895         |
| 12.1348       | 11.0  | 11   | 11.6951         |
| 11.7845       | 12.0  | 12   | 11.0583         |
| 11.1069       | 13.0  | 13   | 10.2337         |
| 10.1252       | 14.0  | 14   | 9.1621          |
| 8.9713        | 15.0  | 15   | 7.9915          |
| 7.7297        | 16.0  | 16   | 6.8277          |
| 6.5998        | 17.0  | 17   | 5.5317          |
| 5.5248        | 18.0  | 18   | 4.9110          |
| 5.0819        | 19.0  | 19   | 4.7398          |
| 4.9123        | 20.0  | 20   | 4.6498          |
| 4.7977        | 21.0  | 21   | 4.5673          |
| 4.7755        | 22.0  | 22   | 4.4948          |
| 4.649         | 23.0  | 23   | 4.4172          |
| 4.6569        | 24.0  | 24   | 4.3416          |
| 4.5208        | 25.0  | 25   | 4.2622          |
| 4.5094        | 26.0  | 26   | 4.1828          |
| 4.3355        | 27.0  | 27   | 4.1015          |
| 4.209         | 28.0  | 28   | 4.0197          |
| 4.1589        | 29.0  | 29   | 3.9379          |
| 4.0804        | 30.0  | 30   | 3.8569          |
| 3.9805        | 31.0  | 31   | 3.7793          |
| 3.8472        | 32.0  | 32   | 3.7059          |
| 3.8284        | 33.0  | 33   | 3.6374          |
| 3.7731        | 34.0  | 34   | 3.5749          |
| 3.711         | 35.0  | 35   | 3.5172          |
| 3.5269        | 36.0  | 36   | 3.4646          |
| 3.4983        | 37.0  | 37   | 3.4161          |
| 3.5035        | 38.0  | 38   | 3.3697          |
| 3.3994        | 39.0  | 39   | 3.3267          |
| 3.2883        | 40.0  | 40   | 3.2856          |
| 3.2866        | 41.0  | 41   | 3.2497          |
| 3.2314        | 42.0  | 42   | 3.2149          |
| 3.1948        | 43.0  | 43   | 3.1819          |
| 3.1793        | 44.0  | 44   | 3.1509          |
| 3.0904        | 45.0  | 45   | 3.1212          |
| 3.1129        | 46.0  | 46   | 3.0937          |
| 3.1039        | 47.0  | 47   | 3.0681          |
| 2.9929        | 48.0  | 48   | 3.0441          |
| 3.0339        | 49.0  | 49   | 3.0224          |
| 2.9489        | 50.0  | 50   | 3.0025          |

Framework versions

  • PEFT 0.13.0
  • Transformers 4.45.1
  • PyTorch 2.5.0+cu124
  • Datasets 2.21.0
  • Tokenizers 0.20.0
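
A quick way to check a local environment against these versions (a convenience sketch, not part of the original card):

```python
# Print installed versions to compare with the pins listed above.
import datasets
import peft
import tokenizers
import torch
import transformers

for name, mod in [
    ("PEFT", peft),
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {mod.__version__}")
```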