lg-digits-classification

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5405
  • Accuracy: 0.6087
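For reference, a minimal inference sketch. This is an assumption, not part of the original card: it presumes the Hub repo id `dmusingu/lg-digits-classification` and the standard `transformers` audio-classification pipeline; the import happens lazily inside the function so the snippet stays self-contained.

```python
def classify_digit(audio_path: str):
    """Hypothetical usage sketch for this checkpoint.

    Assumes the `transformers` library is installed and the model is
    hosted on the Hub as `dmusingu/lg-digits-classification`.
    """
    from transformers import pipeline  # lazy import; requires `transformers`

    clf = pipeline(
        "audio-classification",
        model="dmusingu/lg-digits-classification",
    )
    # The pipeline returns a list of {"label": ..., "score": ...} dicts,
    # highest-scoring label first.
    return clf(audio_path)
```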

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 4
  • eval_batch_size: 1
  • seed: 42
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
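The warmup ratio translates into an absolute step count: the training log shows 47 optimizer steps per epoch, so 50 epochs give 2350 total steps and a 0.1 warmup ratio corresponds to roughly 235 warmup steps. A small sketch of that arithmetic (the helper name is illustrative, and rounding conventions vary between scheduler implementations):

```python
def warmup_steps(warmup_ratio: float, total_steps: int) -> int:
    """Convert a warmup ratio into an absolute warmup step count.

    Illustrative helper; exact rounding differs between scheduler
    implementations.
    """
    return round(warmup_ratio * total_steps)

steps_per_epoch = 47                 # from the training log: 47 steps per epoch
total_steps = steps_per_epoch * 50   # 50 epochs -> 2350 optimizer steps
print(warmup_steps(0.1, total_steps))  # -> 235
```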

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|--------------:|------:|-----:|----------------:|---------:|
| 1.9211 | 1.0 | 47 | 2.0718 | 0.2174 |
| 1.9034 | 2.0 | 94 | 2.0263 | 0.2174 |
| 1.8375 | 3.0 | 141 | 2.1374 | 0.2609 |
| 1.7839 | 4.0 | 188 | 2.0091 | 0.3043 |
| 1.6964 | 5.0 | 235 | 2.0135 | 0.2609 |
| 1.6485 | 6.0 | 282 | 1.9707 | 0.4348 |
| 1.5452 | 7.0 | 329 | 1.8629 | 0.3043 |
| 1.4662 | 8.0 | 376 | 1.8037 | 0.4348 |
| 1.3512 | 9.0 | 423 | 1.6884 | 0.5217 |
| 1.2567 | 10.0 | 470 | 1.7180 | 0.5652 |
| 1.1825 | 11.0 | 517 | 1.5855 | 0.5652 |
| 1.0535 | 12.0 | 564 | 1.6781 | 0.4783 |
| 0.9519 | 13.0 | 611 | 1.5681 | 0.5652 |
| 0.9136 | 14.0 | 658 | 1.4914 | 0.6087 |
| 0.7924 | 15.0 | 705 | 1.7370 | 0.5652 |
| 0.6775 | 16.0 | 752 | 1.6007 | 0.5652 |
| 0.7597 | 17.0 | 799 | 1.5242 | 0.6087 |
| 0.5731 | 18.0 | 846 | 1.7712 | 0.6087 |
| 0.5235 | 19.0 | 893 | 1.5915 | 0.5652 |
| 0.5463 | 20.0 | 940 | 1.7188 | 0.5652 |
| 0.4434 | 21.0 | 987 | 1.8435 | 0.5652 |
| 0.3921 | 22.0 | 1034 | 1.8180 | 0.6522 |
| 0.4408 | 23.0 | 1081 | 1.9594 | 0.5652 |
| 0.3323 | 24.0 | 1128 | 1.9446 | 0.5652 |
| 0.2806 | 25.0 | 1175 | 1.9466 | 0.5652 |
| 0.2228 | 26.0 | 1222 | 2.1430 | 0.5652 |
| 0.3144 | 27.0 | 1269 | 2.2630 | 0.5652 |
| 0.2841 | 28.0 | 1316 | 2.1039 | 0.5652 |
| 0.2392 | 29.0 | 1363 | 2.1887 | 0.6087 |
| 0.1964 | 30.0 | 1410 | 2.1597 | 0.6522 |
| 0.2014 | 31.0 | 1457 | 2.2470 | 0.6087 |
| 0.194 | 32.0 | 1504 | 2.3058 | 0.5652 |
| 0.202 | 33.0 | 1551 | 2.4327 | 0.5652 |
| 0.1968 | 34.0 | 1598 | 2.4571 | 0.5652 |
| 0.1675 | 35.0 | 1645 | 2.4031 | 0.6087 |
| 0.2261 | 36.0 | 1692 | 2.4036 | 0.6087 |
| 0.279 | 37.0 | 1739 | 2.4073 | 0.6087 |
| 0.2378 | 38.0 | 1786 | 2.4481 | 0.6087 |
| 0.2414 | 39.0 | 1833 | 2.4839 | 0.6087 |
| 0.189 | 40.0 | 1880 | 2.5730 | 0.5652 |
| 0.2018 | 41.0 | 1927 | 2.6159 | 0.5652 |
| 0.2418 | 42.0 | 1974 | 2.4725 | 0.5652 |
| 0.1489 | 43.0 | 2021 | 2.4596 | 0.5652 |
| 0.2277 | 44.0 | 2068 | 2.4913 | 0.6087 |
| 0.2313 | 45.0 | 2115 | 2.4922 | 0.6087 |
| 0.2333 | 46.0 | 2162 | 2.4951 | 0.6087 |
| 0.2201 | 47.0 | 2209 | 2.5213 | 0.6087 |
| 0.2116 | 48.0 | 2256 | 2.5164 | 0.6087 |
| 0.2122 | 49.0 | 2303 | 2.5424 | 0.6087 |
| 0.1965 | 50.0 | 2350 | 2.5405 | 0.6087 |
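The log is worth scanning for the best checkpoint rather than just the final one: validation accuracy peaks at 0.6522 (epochs 22 and 30) and validation loss bottoms out at 1.4914 (epoch 14), both well before epoch 50. A minimal sketch over a few rows copied from the table above:

```python
# (epoch, validation loss, accuracy) for selected rows of the log above.
log = [
    (14, 1.4914, 0.6087),
    (22, 1.8180, 0.6522),
    (30, 2.1597, 0.6522),
    (50, 2.5405, 0.6087),
]

best_by_acc = max(log, key=lambda row: row[2])   # first row with max accuracy
best_by_loss = min(log, key=lambda row: row[1])  # row with lowest validation loss
print(best_by_acc)   # epoch 22: accuracy peaks at 0.6522
print(best_by_loss)  # epoch 14: validation loss bottoms out at 1.4914
```

With `load_best_model_at_end` and an appropriate `metric_for_best_model`, the Trainer can do this selection automatically instead of keeping the final-epoch weights.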

Framework versions

  • Transformers 4.56.2
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.1

Model details

  • Model size: 94.6M params
  • Tensor type: F32 (safetensors)