wav2vec_on_grid
This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unspecified dataset (the dataset name was not recorded in the training configuration). It achieves the following results on the evaluation set (a usage sketch follows the results):
- Loss: -0.3810
- Wer: 0.1345
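Since the card provides no usage example, below is a minimal inference sketch. It assumes the fine-tuned checkpoint is available locally; the directory name `./wav2vec_on_grid` and the audio file `sample.wav` are placeholders, and the base model expects 16 kHz mono audio.

```python
# Minimal inference sketch; the checkpoint path and audio file are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="./wav2vec_on_grid",  # placeholder: local directory with the fine-tuned weights
)

# When given a file path, the pipeline decodes and resamples the audio to the
# 16 kHz rate expected by wav2vec2-base-960h before transcribing it.
result = asr("sample.wav")  # placeholder audio file
print(result["text"])
```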
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
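For reference, these values map onto `transformers.TrainingArguments` roughly as shown below. This is a configuration sketch only, not the authors' training script: `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the optimizer defaults.

```python
# Configuration sketch mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec_on_grid",    # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```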
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1549        | 2.86  | 500  | 0.5346          | 0.5909 |
| 0.4979        | 5.71  | 1000 | 0.0507          | 0.4645 |
| 0.0943        | 8.57  | 1500 | -0.1675         | 0.3417 |
| -0.1114       | 11.43 | 2000 | -0.3079         | 0.2426 |
| -0.21         | 14.29 | 2500 | -0.3226         | 0.2126 |
| -0.2397       | 17.14 | 3000 | -0.3238         | 0.2052 |
| -0.2884       | 20.0  | 3500 | -0.3635         | 0.1694 |
| -0.307        | 22.86 | 4000 | -0.3726         | 0.1540 |
| -0.3311       | 25.71 | 4500 | -0.3647         | 0.1520 |
| -0.348        | 28.57 | 5000 | -0.3889         | 0.1362 |
| -0.3596       | 31.43 | 5500 | -0.3726         | 0.1467 |
| -0.366        | 34.29 | 6000 | -0.3865         | 0.1364 |
| -0.3758       | 37.14 | 6500 | -0.3841         | 0.1398 |
| -0.3876       | 40.0  | 7000 | -0.3451         | 0.1485 |
| -0.3983       | 42.86 | 7500 | -0.3905         | 0.1350 |
| -0.4015       | 45.71 | 8000 | -0.3834         | 0.1362 |
| -0.4067       | 48.57 | 8500 | -0.3810         | 0.1345 |
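The Wer column is the word error rate on the evaluation set (lower is better; the final value of 0.1345 means roughly 13.45% of reference words are substituted, deleted, or inserted). Below is a small illustration of how such a score is computed, using the `evaluate` library; that library is not among the versions listed in this card, so treat it as an assumption about tooling.

```python
# Hypothetical illustration of a word error rate computation.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]  # hypothetical model transcript
references = ["the cat sat on a mat"]     # hypothetical reference transcript

# One substituted word out of six reference words -> WER = 1/6 ≈ 0.1667
print(wer_metric.compute(predictions=predictions, references=references))
```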
Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2