Aivesa committed (verified) · Commit 0ca3d9b · 1 Parent(s): b7779f0

End of training

Files changed (1):
  1. README.md +15 -16
README.md CHANGED
@@ -1,14 +1,13 @@
 ---
 library_name: peft
-license: apache-2.0
-base_model: unsloth/SmolLM2-360M-Instruct
+base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Aivesa/dataset_b5e7bccd-18f9-46c0-969e-ae67a7379d19
+- Aivesa/dataset_fab57f05-e769-4df2-8d2b-87afad7cfe22
 model-index:
-- name: 983e8f34-21d7-4a69-a6f8-51cacd900a19
+- name: 0b98a095-60c2-45a0-901e-ac229ae52df0
   results: []
 ---
 
@@ -21,17 +20,17 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: unsloth/SmolLM2-360M-Instruct
+base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Aivesa/dataset_b5e7bccd-18f9-46c0-969e-ae67a7379d19
+  path: Aivesa/dataset_fab57f05-e769-4df2-8d2b-87afad7cfe22
   type:
     field_input: input
-    field_instruction: prompt
+    field_instruction: instruction
     field_output: output
     system_format: '{system}'
     system_prompt: ''
@@ -48,7 +47,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Aivesa/983e8f34-21d7-4a69-a6f8-51cacd900a19
+hub_model_id: Aivesa/0b98a095-60c2-45a0-901e-ac229ae52df0
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -88,10 +87,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: b5e7bccd-18f9-46c0-969e-ae67a7379d19
+wandb_name: fab57f05-e769-4df2-8d2b-87afad7cfe22
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: b5e7bccd-18f9-46c0-969e-ae67a7379d19
+wandb_runid: fab57f05-e769-4df2-8d2b-87afad7cfe22
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -100,11 +99,11 @@ xformers_attention: null
 
 </details><br>
 
-# 983e8f34-21d7-4a69-a6f8-51cacd900a19
+# 0b98a095-60c2-45a0-901e-ac229ae52df0
 
-This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the Aivesa/dataset_b5e7bccd-18f9-46c0-969e-ae67a7379d19 dataset.
+This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the Aivesa/dataset_fab57f05-e769-4df2-8d2b-87afad7cfe22 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2910
+- Loss: 0.9962
 
 ## Model description
 
@@ -138,9 +137,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.2887        | 0.0104 | 3    | 1.3102          |
-| 1.4285        | 0.0209 | 6    | 1.3062          |
-| 1.3236        | 0.0313 | 9    | 1.2910          |
+| 0.7196        | 0.0002 | 3    | 1.1272          |
+| 1.1478        | 0.0003 | 6    | 1.0565          |
+| 0.5412        | 0.0005 | 9    | 0.9962          |
 
 
 ### Framework versions
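
For context on the dataset change above: the updated config reads `Aivesa/dataset_fab57f05-e769-4df2-8d2b-87afad7cfe22` as custom-format JSON and maps `field_instruction: instruction`, `field_input: input`, and `field_output: output`. A minimal sketch of the record shape this implies is shown below; only the key names come from the config, the values are hypothetical placeholders.

```python
# Hypothetical record matching the axolotl field mapping in the config diff above.
# Only the keys (instruction / input / output) are taken from the config;
# the values are illustrative placeholders, not actual dataset content.
example_record = {
    "instruction": "Summarize the passage below in one sentence.",  # field_instruction
    "input": "Some source passage to summarize.",                   # field_input
    "output": "A one-sentence summary of the passage.",             # field_output
}
```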
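
Since the card declares `library_name: peft` with a LoRA adapter trained on `Vikhrmodels/Vikhr-7B-instruct_0.4` and pushed to `Aivesa/0b98a095-60c2-45a0-901e-ac229ae52df0`, a minimal loading sketch could look like the following. This is an untested illustration, not part of the original card: it assumes the (private) adapter repo is accessible to you and that `transformers` and `peft` are installed.

```python
# Minimal sketch: load the LoRA adapter on top of its base model and generate.
# Repo ids come from the card; everything else here is an assumption.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "Aivesa/0b98a095-60c2-45a0-901e-ac229ae52df0"
base_id = "Vikhrmodels/Vikhr-7B-instruct_0.4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# AutoPeftModelForCausalLM reads the adapter config and loads the base model behind it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```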