SystemAdmin123 committed
Commit 94e9c12 · verified · 1 parent: a87a628

End of training

Files changed (2):
  1. README.md +131 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,131 @@
+ ---
+ library_name: transformers
+ base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
+ tags:
+ - axolotl
+ - generated_from_trainer
+ datasets:
+ - argilla/databricks-dolly-15k-curated-en
+ model-index:
+ - name: tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.6.0`
+ ```yaml
+ base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
+ batch_size: 128
+ bf16: true
+ chat_template: tokenizer_default_fallback_alpaca
+ datasets:
+ - format: custom
+   path: argilla/databricks-dolly-15k-curated-en
+   type:
+     field_input: original-instruction
+     field_instruction: original-instruction
+     field_output: original-response
+     format: '{instruction} {input}'
+     no_input_format: '{instruction}'
+     system_format: '{system}'
+     system_prompt: ''
+ device_map: auto
+ eval_sample_packing: false
+ eval_steps: 20
+ flash_attention: true
+ gradient_checkpointing: true
+ group_by_length: true
+ hub_model_id: SystemAdmin123/tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470
+ hub_strategy: checkpoint
+ learning_rate: 0.0002
+ logging_steps: 10
+ lr_scheduler: cosine
+ max_steps: 10000
+ micro_batch_size: 32
+ model_type: AutoModelForCausalLM
+ num_epochs: 100
+ optimizer: adamw_bnb_8bit
+ output_dir: /root/.sn56/axolotl/tmp/tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470
+ pad_to_sequence_len: true
+ resize_token_embeddings_to_32x: false
+ sample_packing: true
+ save_steps: 20
+ save_total_limit: 1
+ sequence_len: 2048
+ tokenizer_type: LlamaTokenizerFast
+ torch_dtype: bf16
+ training_args_kwargs:
+   hub_private_repo: true
+ trust_remote_code: true
+ val_set_size: 0.1
+ wandb_entity: ''
+ wandb_mode: online
+ wandb_name: trl-internal-testing/tiny-random-LlamaForCausalLM-argilla/databricks-dolly-15k-curated-en
+ wandb_project: Gradients-On-Demand
+ wandb_run: your_name
+ wandb_runid: default
+ warmup_ratio: 0.05
+
+ ```
+
+ </details><br>
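+
+ With axolotl `0.6.0` installed, a config like this is typically launched via `accelerate launch -m axolotl.cli.train config.yaml`. Note that in the `datasets` block, `field_instruction` and `field_input` both point at the `original-instruction` column, so a rendered prompt repeats the instruction text. The sketch below is a hypothetical re-implementation of the `format`/`no_input_format` templates (not axolotl's actual rendering code), just to make the mapping concrete:
+
+ ```python
+ # Hypothetical sketch of the prompt templates from the config above;
+ # axolotl's real logic lives in its prompt strategies.
+ def render_prompt(record: dict) -> str:
+     instruction = record["original-instruction"]  # field_instruction
+     inp = record["original-instruction"]          # field_input (same column)
+     if inp:
+         return f"{instruction} {inp}"  # format: '{instruction} {input}'
+     return instruction                 # no_input_format: '{instruction}'
+
+ record = {"original-instruction": "Summarize the paragraph.",
+           "original-response": "..."}  # field_output -> training target
+ print(render_prompt(record))
+ ```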
+
+ # tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470
+
+ This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the argilla/databricks-dolly-15k-curated-en dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 10.1817
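+
+ A minimal inference sketch (illustrative, not part of the auto-generated card): it assumes read access to the repo (the config sets `hub_private_repo: true`) and a recent `transformers`. Because the base model is a tiny random-weight test checkpoint, generations are not expected to be meaningful.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "SystemAdmin123/tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470"
+
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id)
+
+ # Training used a plain '{instruction} {input}' prompt, so a bare
+ # instruction string is the closest match at inference time.
+ inputs = tokenizer("What is the capital of France?", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```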
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 128
+ - optimizer: 8-bit AdamW (`adamw_bnb_8bit`, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 5
+ - training_steps: 100
+
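+ The totals above follow from the per-device settings: a micro-batch of 32 on each of 4 devices gives 32 × 4 = 128 for both train and eval, matching `batch_size: 128` in the config with no gradient accumulation.
+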
+ ### Training results
+
+ | Training Loss | Epoch   | Step | Validation Loss |
+ |:-------------:|:-------:|:----:|:---------------:|
+ | No log        | 0.1667  | 1    | 10.3764         |
+ | 10.3632       | 3.3333  | 20   | 10.3538         |
+ | 10.3073       | 6.6667  | 40   | 10.2840         |
+ | 10.2203       | 10.0    | 60   | 10.2082         |
+ | 10.1812       | 13.3333 | 80   | 10.1828         |
+ | 10.1767       | 16.6667 | 100  | 10.1817         |
+
+
+ ### Framework versions
+
+ - Transformers 4.48.1
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "do_sample": true,
+   "eos_token_id": 1,
+   "pad_token_id": 2,
+   "transformers_version": "4.48.1"
+ }
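
These defaults can be read back with transformers' `GenerationConfig` API; a small illustrative sketch (assuming read access to the repo):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "SystemAdmin123/tiny-random-LlamaForCausalLMeb1f456b-f90f-49a6-8608-d32eb5cd2470"
)

# Sampling is enabled by default, and the special-token ids are pinned:
print(gen_config.do_sample)     # True
print(gen_config.bos_token_id)  # 0
print(gen_config.eos_token_id)  # 1
print(gen_config.pad_token_id)  # 2
```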