# final_databricks_databricks-dolly-15k
This model is a fine-tuned version of Qwen/Qwen-14B on the databricks/databricks-dolly-15k dataset (per the model name). It achieves the following results on the evaluation set:
- Loss: 1.6083
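Assuming the reported loss is the standard token-level cross-entropy, this corresponds to an evaluation perplexity of roughly exp(1.6083) ≈ 5.0.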
## Model description
More information needed
## Intended uses & limitations
More information needed
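A minimal inference sketch with 🤗 Transformers, assuming the checkpoint id imdatta0/qwen_databricks_databricks-dolly-15k from the Hub; Qwen checkpoints ship custom modeling code, so `trust_remote_code=True` is required:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint id assumed from the published model; adjust for a local copy.
model_id = "imdatta0/qwen_databricks_databricks-dolly-15k"

# Qwen models define their own architecture, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

prompt = "What is Databricks?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```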
## Training and evaluation data
More information needed
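The card leaves this section blank, but the model name points to databricks/databricks-dolly-15k on the Hugging Face Hub; a minimal loading sketch under that assumption:

```python
from datasets import load_dataset

# Dataset id inferred from the model name; the card itself does not confirm it.
dataset = load_dataset("databricks/databricks-dolly-15k")

# Single "train" split with fields: instruction, context, response, category.
print(dataset["train"][0])
```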
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 132
- total_train_batch_size: 264
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.01
- num_epochs: 1
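A hedged reconstruction of these settings as 🤗 `TrainingArguments`; the training script is not published, and the fractional warmup value (0.01) is interpreted here as `warmup_ratio`, since `warmup_steps` must be an integer:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="final_databricks_databricks-dolly-15k",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=132,  # effective train batch: 2 * 132 = 264 (single device)
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,                # card lists 0.01 under warmup_steps
    num_train_epochs=1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 match the Transformers defaults.
)
```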
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.694         | 0.04  | 2    | 1.8016          |
| 1.6398        | 0.07  | 4    | 1.7369          |
| 1.6421        | 0.11  | 6    | 1.6886          |
| 1.579         | 0.15  | 8    | 1.6596          |
| 1.5589        | 0.18  | 10   | 1.6420          |
| 1.5944        | 0.22  | 12   | 1.6305          |
| 1.5314        | 0.26  | 14   | 1.6274          |
| 1.5841        | 0.29  | 16   | 1.6238          |
| 1.5945        | 0.33  | 18   | 1.6229          |
| 1.5755        | 0.37  | 20   | 1.6234          |
| 1.5527        | 0.4   | 22   | 1.6231          |
| 1.6121        | 0.44  | 24   | 1.6224          |
| 1.586         | 0.48  | 26   | 1.6219          |
| 1.5995        | 0.52  | 28   | 1.6213          |
| 1.5942        | 0.55  | 30   | 1.6200          |
| 1.5738        | 0.59  | 32   | 1.6180          |
| 1.5825        | 0.63  | 34   | 1.6161          |
| 1.5183        | 0.66  | 36   | 1.6137          |
| 1.5964        | 0.7   | 38   | 1.6120          |
| 1.623         | 0.74  | 40   | 1.6105          |
| 1.5783        | 0.77  | 42   | 1.6098          |
| 1.6046        | 0.81  | 44   | 1.6093          |
| 1.5157        | 0.85  | 46   | 1.6088          |
| 1.5317        | 0.88  | 48   | 1.6086          |
| 1.5578        | 0.92  | 50   | 1.6086          |
| 1.5402        | 0.96  | 52   | 1.6084          |
| 1.5616        | 0.99  | 54   | 1.6083          |
### Framework versions
- Transformers 4.32.0
- PyTorch 2.1.0
- Datasets 2.14.7
- Tokenizers 0.13.3