---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-2b_hs2_replace_iter13_sftsd0
  results: []
---

# collapse_gemma-2-2b_hs2_replace_iter13_sftsd0

This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5548
- Num Input Tokens Seen: 4774568

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log        | 0      | 0    | 1.3909          | 0                 |
| 1.4158        | 0.0511 | 5    | 1.2853          | 256896            |
| 0.6749        | 0.1021 | 10   | 1.3498          | 518344            |
| 0.4718        | 0.1532 | 15   | 1.5553          | 765600            |
| 0.2803        | 0.2042 | 20   | 1.8024          | 1008792           |
| 0.0977        | 0.2553 | 25   | 2.0325          | 1254400           |
| 0.0584        | 0.3063 | 30   | 2.3093          | 1495152           |
| 0.033         | 0.3574 | 35   | 2.3717          | 1729824           |
| 0.0354        | 0.4084 | 40   | 2.4933          | 1973368           |
| 0.0287        | 0.4595 | 45   | 2.5809          | 2215232           |
| 0.025         | 0.5105 | 50   | 2.5574          | 2460080           |
| 0.0243        | 0.5616 | 55   | 2.5655          | 2710264           |
| 0.0257        | 0.6126 | 60   | 2.5953          | 2951568           |
| 0.0209        | 0.6637 | 65   | 2.5973          | 3212176           |
| 0.022         | 0.7147 | 70   | 2.5813          | 3451504           |
| 0.0235        | 0.7658 | 75   | 2.5753          | 3699880           |
| 0.0247        | 0.8168 | 80   | 2.5757          | 3944104           |
| 0.0226        | 0.8679 | 85   | 2.5800          | 4192928           |
| 0.0232        | 0.9190 | 90   | 2.5649          | 4436328           |
| 0.0227        | 0.9700 | 95   | 2.5695          | 4674672           |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
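
The hyperparameters listed above correspond to a standard TRL SFT run. The snippet below is a minimal sketch of how they might be expressed as an `SFTConfig` (assuming a TRL release that provides `SFTConfig`; the `output_dir` and single-GPU setup are assumptions, not taken from the original training script):

```python
# Sketch only: maps the reported hyperparameters onto a TRL SFTConfig.
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="collapse_gemma-2-2b_hs2_replace_iter13_sftsd0",  # assumed output path
    learning_rate=8e-06,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=0,
    gradient_accumulation_steps=16,   # 8 x 16 = 128 effective (total) train batch size on one device
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```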
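
For inference, the checkpoint can be loaded like any other `gemma-2-2b` fine-tune with `transformers`. The repository id below is a placeholder; substitute the actual Hub path or local directory where this checkpoint is stored:

```python
# Minimal usage sketch; model_id is a placeholder, not a confirmed Hub repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "collapse_gemma-2-2b_hs2_replace_iter13_sftsd0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```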