# Amit65/whisper-small-multilingual

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a custom dataset. It achieves the following results on the evaluation set:
- Loss: 0.6283
- Wer: 80.0691
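
The checkpoint can be loaded for transcription like any other Whisper model in Transformers. A minimal usage sketch (not taken from the original training code; the audio file path is a placeholder):

```python
# Minimal usage sketch: load this checkpoint for automatic speech recognition
# with the transformers pipeline. "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Amit65/whisper-small-multilingual",
)

result = asr("sample.wav")
print(result["text"])
```
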
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

The model was fully fine-tuned on a custom dataset and evaluated with word error rate (WER).
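
The exact evaluation script is not included in this card; the `evaluate` library's WER metric is one common way to compute the reported numbers and is assumed in this sketch. The strings below are toy examples, not data from the actual evaluation set:

```python
# Sketch of a WER computation using the `evaluate` library (an assumption; the
# evaluation code for this model is not published in the card).
import evaluate

wer_metric = evaluate.load("wer")

# Toy predictions/references, not taken from the actual evaluation set.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# The results table below reports WER as a percentage (e.g. 80.0691), so scale by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```
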
## Training procedure

Full fine-tuning was performed with the Hugging Face Trainer API.
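
A hedged sketch of what such a setup typically looks like for Whisper with `Seq2SeqTrainer`. The hyperparameters, dataset objects, data collator, and metric function below are illustrative placeholders, not the values actually used for this checkpoint; only the evaluation interval of 25 steps and the roughly 3 epochs are taken from the results table.

```python
# Illustrative sketch of full fine-tuning with Seq2SeqTrainer. Dataset
# preparation and the data collator are omitted; hyperparameters are assumed.
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

# The processor produces log-mel inputs and tokenized labels during dataset
# preparation (not shown here).
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-multilingual",  # placeholder
    per_device_train_batch_size=16,           # assumed, not documented in the card
    learning_rate=1e-5,                       # assumed, not documented in the card
    num_train_epochs=3,                       # matches the ~3 epochs in the results table
    eval_strategy="steps",
    eval_steps=25,                            # matches the evaluation interval in the table
    predict_with_generate=True,
)

# Placeholders: the prepared custom datasets, padding collator, and WER metric
# function depend on the (non-public) custom data and are not shown.
train_dataset = None    # replace with a prepared datasets.Dataset
eval_dataset = None     # replace with the held-out split used for the table below
data_collator = None    # replace with a padding collator for Whisper inputs/labels
compute_metrics = None  # replace with a WER-computing function (see sketch above)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# trainer.train()  # run once real datasets, collator, and metrics are supplied
```
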
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
3.4481 | 0.0480 | 25 | 1.7935 | 138.3641 |
1.494 | 0.0960 | 50 | 1.3053 | 105.6452 |
1.4092 | 0.1440 | 75 | 1.1546 | 102.6498 |
1.1367 | 0.1919 | 100 | 1.0424 | 105.4147 |
0.9748 | 0.2399 | 125 | 1.0038 | 116.7051 |
0.9522 | 0.2879 | 150 | 1.0032 | 140.6682 |
0.9114 | 0.3359 | 175 | 0.9329 | 126.2673 |
0.9498 | 0.3839 | 200 | 0.9077 | 117.0507 |
0.8762 | 0.4319 | 225 | 0.9359 | 97.4654 |
0.9051 | 0.4798 | 250 | 0.8390 | 88.5945 |
0.7941 | 0.5278 | 275 | 0.8869 | 105.2995 |
0.8417 | 0.5758 | 300 | 0.8299 | 109.7926 |
0.9244 | 0.6238 | 325 | 0.8105 | 79.9539 |
0.855 | 0.6718 | 350 | 0.7960 | 87.5576 |
0.7516 | 0.7198 | 375 | 0.7844 | 88.9401 |
0.9119 | 0.7678 | 400 | 0.8116 | 87.4424 |
0.7478 | 0.8157 | 425 | 0.7593 | 79.0323 |
0.7125 | 0.8637 | 450 | 0.7280 | 84.2166 |
0.8235 | 0.9117 | 475 | 0.7171 | 88.9401 |
0.6975 | 0.9597 | 500 | 0.7029 | 74.8848 |
0.5599 | 1.0077 | 525 | 0.7060 | 76.6129 |
0.4681 | 1.0557 | 550 | 0.6891 | 100.8065 |
0.3496 | 1.1036 | 575 | 0.6995 | 104.9539 |
0.4196 | 1.1516 | 600 | 0.7102 | 82.4885 |
0.3884 | 1.1996 | 625 | 0.6856 | 104.7235 |
0.4788 | 1.2476 | 650 | 0.6745 | 81.6820 |
0.4237 | 1.2956 | 675 | 0.6722 | 81.9124 |
0.4001 | 1.3436 | 700 | 0.6740 | 83.2949 |
0.3909 | 1.3916 | 725 | 0.6823 | 71.8894 |
0.3435 | 1.4395 | 750 | 0.6934 | 75.1152 |
0.344 | 1.4875 | 775 | 0.6810 | 72.0046 |
0.3071 | 1.5355 | 800 | 0.6704 | 71.1982 |
0.3392 | 1.5835 | 825 | 0.6589 | 88.3641 |
0.3742 | 1.6315 | 850 | 0.6532 | 77.9954 |
0.4153 | 1.6795 | 875 | 0.6363 | 79.8387 |
0.3416 | 1.7274 | 900 | 0.6560 | 79.4931 |
0.3121 | 1.7754 | 925 | 0.6320 | 82.0276 |
0.2986 | 1.8234 | 950 | 0.6447 | 76.9585 |
0.3761 | 1.8714 | 975 | 0.6420 | 75.8065 |
0.4394 | 1.9194 | 1000 | 0.6234 | 77.5346 |
0.3094 | 1.9674 | 1025 | 0.6430 | 81.5668 |
0.3468 | 2.0154 | 1050 | 0.6266 | 78.5714 |
0.25 | 2.0633 | 1075 | 0.6251 | 79.0323 |
0.1969 | 2.1113 | 1100 | 0.6337 | 81.2212 |
0.157 | 2.1593 | 1125 | 0.6367 | 76.8433 |
0.2118 | 2.2073 | 1150 | 0.6414 | 74.4240 |
0.2207 | 2.2553 | 1175 | 0.6345 | 77.4194 |
0.1965 | 2.3033 | 1200 | 0.6414 | 76.9585 |
0.1959 | 2.3512 | 1225 | 0.6322 | 79.6083 |
0.1668 | 2.3992 | 1250 | 0.6394 | 81.5668 |
0.2128 | 2.4472 | 1275 | 0.6361 | 80.4147 |
0.173 | 2.4952 | 1300 | 0.6322 | 74.8848 |
0.152 | 2.5432 | 1325 | 0.6312 | 73.3871 |
0.1897 | 2.5912 | 1350 | 0.6334 | 79.0323 |
0.1666 | 2.6392 | 1375 | 0.6339 | 81.1060 |
0.202 | 2.6871 | 1400 | 0.6283 | 77.9954 |
0.1511 | 2.7351 | 1425 | 0.6296 | 80.8756 |
0.1616 | 2.7831 | 1450 | 0.6313 | 80.4147 |
0.1482 | 2.8311 | 1475 | 0.6289 | 80.5300 |
0.1672 | 2.8791 | 1500 | 0.6283 | 80.0691 |
### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1