---
library_name: transformers
base_model: llama_small_config.json
tags:
- generated_from_trainer
model-index:
- name: llama-3.2-350M-fourier_arithmetic_dataset
results: []
---
# llama-3.2-350M-fourier_arithmetic_dataset
This model was trained from the [llama_small_config.json](https://huggingface.co/llama_small_config.json) configuration on the fourier_arithmetic_dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6047
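For reference, here is a minimal generation sketch with 🤗 Transformers. The repo id `deqing/llama-3.2-350M-fourier_arithmetic_dataset` and the arithmetic prompt are assumptions inferred from this card's name, not confirmed details:

```python
# Minimal inference sketch; the repo id and prompt are assumptions
# inferred from the model name, not details confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deqing/llama-3.2-350M-fourier_arithmetic_dataset"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Hypothetical arithmetic-style prompt matching the dataset name.
inputs = tokenizer("12 + 34 =", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```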
## Model description
A ~350M-parameter LLaMA-3.2-style causal language model trained on a Fourier arithmetic dataset (inferred from the repository name); no further description was provided.
## Intended uses & limitations
More information needed
## Training and evaluation data
Trained and evaluated on the fourier_arithmetic_dataset referenced in the model name; no further dataset details are available.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
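As referenced above, a minimal `TrainingArguments` sketch that mirrors these values; `output_dir` is hypothetical, and the actual training script is not part of this card:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Only the listed values come from this card; output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.2-350M-fourier_arithmetic_dataset",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=1,  # 1 per device x 2 GPUs x 16 accumulation steps = 32 total
    per_device_eval_batch_size=1,   # 1 per device x 2 GPUs = 2 total
    gradient_accumulation_steps=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
)
```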
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8493 | 0.1066 | 1000 | 1.8628 |
| 1.8654 | 0.2132 | 2000 | 1.8692 |
| 1.8328 | 0.3197 | 3000 | 1.8328 |
| 1.7287 | 0.4263 | 4000 | 1.7136 |
| 1.6856 | 0.5329 | 5000 | 1.6816 |
| 1.65 | 0.6395 | 6000 | 1.6494 |
| 1.6304 | 0.7460 | 7000 | 1.6308 |
| 1.6071 | 0.8526 | 8000 | 1.6119 |
| 1.6022 | 0.9592 | 9000 | 1.6047 |
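For intuition, the final validation loss converts to perplexity as follows, assuming it is a mean per-token cross-entropy in nats:

```python
import math

# Final validation loss -> perplexity, assuming per-token cross-entropy in nats.
val_loss = 1.6047
print(f"perplexity ≈ {math.exp(val_loss):.2f}")  # ≈ 4.98
```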
### Framework versions
- Transformers 4.48.2
- Pytorch 2.3.1+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
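To reproduce this environment, a quick sanity check against the pinned versions above (package names are the standard PyPI ones; this check is an assumption, not part of the original card):

```python
# Verify the installed packages match the versions this model was trained with.
import datasets, tokenizers, torch, transformers

assert transformers.__version__ == "4.48.2"
assert torch.__version__.startswith("2.3.1")  # card lists 2.3.1+cu118
assert datasets.__version__ == "3.2.0"
assert tokenizers.__version__ == "0.21.0"
```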