# game_music_gen_V06
This model is a fine-tuned version of gpt2 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.9341
## Model description
More information needed
## Intended uses & limitations
More information needed
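Since the card does not document usage, the following is a minimal loading sketch, assuming the checkpoint works with the standard `transformers` causal-LM API (the prompt format the model expects, e.g. a tokenized music representation, is not documented here):

```python
MODEL_ID = "MissAvery/game_music_gen_V06"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a continuation from the fine-tuned GPT-2 checkpoint."""
    # Imported lazily so the sketch reads standalone; requires `transformers`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Whether raw text prompts are meaningful depends on how the training data was serialized, which is not stated in this card.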
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 4
- seed: 1
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 50
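The cosine schedule with a 0.01 warmup ratio can be sketched in pure Python. This is a sketch assuming the standard linear-warmup-then-cosine-decay shape; the 9000-step total is taken from the last row of the results table below:

```python
import math

# Values from the hyperparameter list; TOTAL_STEPS is the last logged step.
LEARNING_RATE = 0.0005
WARMUP_RATIO = 0.01
TOTAL_STEPS = 9000

def lr_at(step: int) -> float:
    """Learning rate at a given step: linear warmup, then cosine decay to 0."""
    warmup_steps = int(TOTAL_STEPS * WARMUP_RATIO)  # 90 steps here
    if step < warmup_steps:
        return LEARNING_RATE * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, TOTAL_STEPS - warmup_steps)
    return LEARNING_RATE * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these numbers the rate ramps from 0 to 5e-4 over the first 90 steps, then decays smoothly toward 0 by step 9000.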
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.3569 | 2.7027 | 500 | 1.8166 |
| 1.623 | 5.4054 | 1000 | 1.5719 |
| 1.3792 | 8.1081 | 1500 | 1.4346 |
| 1.2156 | 10.8108 | 2000 | 1.3726 |
| 1.0928 | 13.5135 | 2500 | 1.2929 |
| 0.9815 | 16.2162 | 3000 | 1.2165 |
| 0.8874 | 18.9189 | 3500 | 1.1643 |
| 0.7956 | 21.6216 | 4000 | 1.1065 |
| 0.7028 | 24.3243 | 4500 | 1.0516 |
| 0.6261 | 27.0270 | 5000 | 1.0120 |
| 0.5457 | 29.7297 | 5500 | 0.9647 |
| 0.4771 | 32.4324 | 6000 | 0.9561 |
| 0.4184 | 35.1351 | 6500 | 0.9345 |
| 0.3755 | 37.8378 | 7000 | 0.9279 |
| 0.3372 | 40.5405 | 7500 | 0.9354 |
| 0.3182 | 43.2432 | 8000 | 0.9319 |
| 0.3011 | 45.9459 | 8500 | 0.9352 |
| 0.2972 | 48.6486 | 9000 | 0.9341 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
## Model tree for MissAvery/game_music_gen_V06

Base model: openai-community/gpt2