# qwen2.5-1.5b-sft3-25-3
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B on the hZzy/SFT_new_full2 dataset. It achieves the following results on the evaluation set:
- Loss: 2.1614
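The checkpoint can be loaded with the standard `transformers` causal-LM classes. The snippet below is a minimal usage sketch rather than a published inference setup: the prompt is illustrative only, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-1.5b-sft3-25-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`
)

# Illustrative prompt; the expected prompt format depends on how the
# hZzy/SFT_new_full2 training data is structured, which is not documented here.
prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```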
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical `TrainingArguments` sketch based on these values follows the list):
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 320
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
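The per-device and total batch sizes are consistent: 10 examples per device × 4 GPUs × 8 gradient-accumulation steps gives the effective train batch size of 320 (and 10 × 4 = 40 for evaluation). The sketch below shows one hypothetical way these values map onto `transformers.TrainingArguments`; the actual training script is not part of this card, so the output directory and the choice of bf16 vs. fp16 for "Native AMP" are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; the original
# training script is not included with this model card.
training_args = TrainingArguments(
    output_dir="qwen2.5-1.5b-sft3-25-3",  # placeholder path
    learning_rate=1e-6,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    gradient_accumulation_steps=8,        # 10 x 4 GPUs x 8 = 320 effective batch
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                            # "Native AMP"; fp16=True is the other option
)
```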
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
2.9732 | 0.2439 | 5 | 2.9550 |
2.9685 | 0.4878 | 10 | 2.9329 |
2.9341 | 0.7317 | 15 | 2.8866 |
2.8788 | 0.9756 | 20 | 2.8079 |
2.8082 | 1.2195 | 25 | 2.7484 |
2.7341 | 1.4634 | 30 | 2.6838 |
2.6784 | 1.7073 | 35 | 2.6335 |
2.6326 | 1.9512 | 40 | 2.5951 |
2.5934 | 2.1951 | 45 | 2.5594 |
2.5543 | 2.4390 | 50 | 2.5217 |
2.513 | 2.6829 | 55 | 2.4829 |
2.4712 | 2.9268 | 60 | 2.4461 |
2.4365 | 3.1707 | 65 | 2.4138 |
2.4066 | 3.4146 | 70 | 2.3859 |
2.375 | 3.6585 | 75 | 2.3606 |
2.3415 | 3.9024 | 80 | 2.3369 |
2.3225 | 4.1463 | 85 | 2.3143 |
2.2989 | 4.3902 | 90 | 2.2927 |
2.2748 | 4.6341 | 95 | 2.2732 |
2.2513 | 4.8780 | 100 | 2.2562 |
2.2401 | 5.1220 | 105 | 2.2412 |
2.2172 | 5.3659 | 110 | 2.2282 |
2.204 | 5.6098 | 115 | 2.2168 |
2.1893 | 5.8537 | 120 | 2.2069 |
2.1784 | 6.0976 | 125 | 2.1984 |
2.1646 | 6.3415 | 130 | 2.1914 |
2.1673 | 6.5854 | 135 | 2.1852 |
2.1555 | 6.8293 | 140 | 2.1801 |
2.1599 | 7.0732 | 145 | 2.1757 |
2.145 | 7.3171 | 150 | 2.1721 |
2.1359 | 7.5610 | 155 | 2.1692 |
2.1391 | 7.8049 | 160 | 2.1668 |
2.1274 | 8.0488 | 165 | 2.1650 |
2.1342 | 8.2927 | 170 | 2.1637 |
2.1272 | 8.5366 | 175 | 2.1627 |
2.133 | 8.7805 | 180 | 2.1621 |
2.1286 | 9.0244 | 185 | 2.1617 |
2.1296 | 9.2683 | 190 | 2.1615 |
2.1256 | 9.5122 | 195 | 2.1614 |
2.1267 | 9.7561 | 200 | 2.1614 |
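For context, if the reported loss is the mean token-level cross-entropy in nats (the usual convention for `transformers` causal-LM training), the final validation loss of 2.1614 corresponds to a perplexity of roughly exp(2.1614) ≈ 8.7 on the evaluation set.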
### Framework versions
- Transformers 4.42.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1