lmind_nq_train600_eval300_v1_docidx_gpt2-xl
This model is a fine-tuned version of gpt2-xl on the tyzhu/lmind_nq_train600_eval300_v1_docidx dataset. It achieves the following results on the evaluation set:
- Loss: 0.3697
- Accuracy: 0.8462
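A minimal usage sketch (not part of the original card): it loads the fine-tuned checkpoint from the Hub under the repo id implied by the card title and greedily generates a continuation. The prompt format this checkpoint expects is not documented here, so the example question is an illustrative assumption.

```python
# Hedged sketch: load the fine-tuned checkpoint and generate. The repo id is
# taken from the card title; the prompt and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tyzhu/lmind_nq_train600_eval300_v1_docidx_gpt2-xl"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Question: who wrote the Declaration of Independence?"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```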
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
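The training script itself is not included in the card. As a hedged sketch, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; any field not listed above (such as `output_dir`) is an assumption or a library default.

```python
# Approximate reconstruction of the listed hyperparameters as TrainingArguments
# (Transformers 4.34.0). Not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lmind_nq_train600_eval300_v1_docidx_gpt2-xl",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```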
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 2.4931 | 0.5 | 28 | 2.3904 | 0.5749 |
| 2.4699 | 1.0 | 56 | 2.1014 | 0.6050 |
| 1.9006 | 1.5 | 84 | 1.8079 | 0.6393 |
| 1.9317 | 2.0 | 112 | 1.5510 | 0.6722 |
| 1.3984 | 2.5 | 140 | 1.2850 | 0.7075 |
| 1.3662 | 3.0 | 168 | 1.0900 | 0.7374 |
| 0.9041 | 3.5 | 196 | 0.8909 | 0.7670 |
| 0.9056 | 4.0 | 224 | 0.7502 | 0.7903 |
| 0.6131 | 4.5 | 252 | 0.6304 | 0.8067 |
| 0.6101 | 5.0 | 280 | 0.5429 | 0.8215 |
| 0.3772 | 5.5 | 308 | 0.4872 | 0.8287 |
| 0.4265 | 6.0 | 336 | 0.4437 | 0.8357 |
| 0.2552 | 6.5 | 364 | 0.4226 | 0.8389 |
| 0.2875 | 7.0 | 392 | 0.4019 | 0.8418 |
| 0.1874 | 7.5 | 420 | 0.3965 | 0.8430 |
| 0.1958 | 8.0 | 448 | 0.3812 | 0.8441 |
| 0.1443 | 8.5 | 476 | 0.3852 | 0.8450 |
| 0.1535 | 9.0 | 504 | 0.3791 | 0.8456 |
| 0.1236 | 9.5 | 532 | 0.3849 | 0.8456 |
| 0.1221 | 10.0 | 560 | 0.3697 | 0.8462 |
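The card does not state how Accuracy is computed. Given the causal-LM setup, a plausible reading is token-level next-token accuracy; the sketch below shows that metric under this assumption (shift predictions against labels, skip ignored positions). This is not confirmed by the card.

```python
# Hedged sketch of token-level next-token accuracy, the metric commonly used in
# Hugging Face causal-LM evaluation. Assumed, not taken from the actual setup.
import numpy as np

def token_accuracy(logits: np.ndarray, labels: np.ndarray, ignore_index: int = -100) -> float:
    # logits: (batch, seq_len, vocab); labels: (batch, seq_len)
    preds = logits.argmax(-1)[:, :-1]   # logits at position t predict token t+1
    targets = labels[:, 1:]             # next-token targets
    mask = targets != ignore_index      # skip padded/ignored positions
    return float((preds[mask] == targets[mask]).mean())
```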
Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1