rayane-donni committed (verified)
Commit 78ddc02 · 1 Parent(s): e9982e2

Model save

README.md CHANGED
@@ -2,13 +2,12 @@
 license: apache-2.0
 library_name: peft
 tags:
-- alignment-handbook
 - trl
 - sft
 - generated_from_trainer
 base_model: mistralai/Mistral-7B-v0.1
 datasets:
-- HuggingFaceH4/ultrachat_200k
+- generator
 model-index:
 - name: zephyr-7b-sft-qlora
   results: []
@@ -19,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # zephyr-7b-sft-qlora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.9584
 
 ## Model description
 
@@ -40,20 +41,23 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
 - train_batch_size: 2
-- eval_batch_size: 4
+- eval_batch_size: 2
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
 - gradient_accumulation_steps: 4
 - total_train_batch_size: 32
-- total_eval_batch_size: 16
+- total_eval_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 2
+- num_epochs: 1
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.98          | 1.0   | 870  | 0.9584          |
 
 
 ### Framework versions
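
Two details of this README change are easy to misread. First, the dataset switch from HuggingFaceH4/ultrachat_200k to "generator" does not necessarily mean the training data changed: "generator" is the placeholder name the Hugging Face trainer typically logs when it is handed an on-the-fly dataset object (e.g. a packed dataset built via Dataset.from_generator) rather than a hub dataset id. Second, the derived batch sizes are consistent: total_train_batch_size = train_batch_size × gradient_accumulation_steps × num_devices = 2 × 4 × 4 = 32, and total_eval_batch_size = eval_batch_size × num_devices = 2 × 4 = 8.

Since the card declares library_name: peft with base_model mistralai/Mistral-7B-v0.1, the saved adapter is applied on top of the base checkpoint at load time. A minimal sketch, assuming the adapter is published under the hypothetical repo id "rayane-donni/zephyr-7b-sft-qlora" (inferred from the model-index name, not stated in the diff) and that bitsandbytes is available for 4-bit loading:

```python
# Sketch: load the base model in 4-bit (mirroring the QLoRA setup) and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "rayane-donni/zephyr-7b-sft-qlora"  # hypothetical repo id, not confirmed by the diff

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights

prompt = "Explain QLoRA in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading the base model in full precision and attaching the adapter works as well; 4-bit loading simply matches how the adapter was trained.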
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8303e4d990e92e274c57096f600cbff3830f053daf12c7c229ac7308334634c2
+oid sha256:8a8495a8724d32fa7528dafb12429bd1e044b1dca6ac8f7a6d14f160b6c5113e
 size 83946192
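
adapter_model.safetensors is tracked with Git LFS, so the repository stores only a three-line pointer: the spec version, the sha256 oid of the blob, and its size in bytes. Here the oid changes while the size stays at 83946192 bytes, consistent with retraining the same LoRA configuration (identical tensor shapes, new weights). A quick sketch for checking a downloaded copy against the pointer:

```python
# Sketch: verify a downloaded LFS file against the pointer's oid and size.
# Assumes adapter_model.safetensors has already been downloaded to the working directory.
import hashlib
from pathlib import Path

path = Path("adapter_model.safetensors")
expected_oid = "8a8495a8724d32fa7528dafb12429bd1e044b1dca6ac8f7a6d14f160b6c5113e"
expected_size = 83946192

data = path.read_bytes()
assert len(data) == expected_size, "size mismatch"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches: ok")
```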
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 0.17,
-    "train_loss": 0.0,
-    "train_runtime": 1.4376,
+    "epoch": 1.0,
+    "train_loss": 0.9710182381772446,
+    "train_runtime": 17696.8906,
     "train_samples": 41573,
-    "train_samples_per_second": 38742.113,
-    "train_steps_per_second": 1210.343
+    "train_samples_per_second": 1.574,
+    "train_steps_per_second": 0.049
 }
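
The replaced numbers (train_loss 0.0, a 1.4-second runtime, ~38742 samples per second) look like the residue of a run that exited almost immediately. The new ones are internally consistent with the README: at 0.049 optimization steps per second and an effective batch of 32, throughput comes out near the reported 1.574 samples per second, and 870 steps take roughly the reported 17697 seconds. A sanity check over the figures above:

```python
# Quick consistency check on the new throughput numbers (sketch).
total_train_batch_size = 32          # from the README hyperparameters: 2 * 4 * 4
steps_per_second = 0.049             # from all_results.json
samples_per_second = steps_per_second * total_train_batch_size
print(round(samples_per_second, 3))  # ~1.568, close to the reported 1.574
runtime = 870 / steps_per_second     # 870 optimization steps, per the training results table
print(round(runtime))                # ~17755 s, close to the reported 17696.89
```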
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 0.17,
-    "train_loss": 0.0,
-    "train_runtime": 1.4376,
+    "epoch": 1.0,
+    "train_loss": 0.9710182381772446,
+    "train_runtime": 17696.8906,
     "train_samples": 41573,
-    "train_samples_per_second": 38742.113,
-    "train_steps_per_second": 1210.343
+    "train_samples_per_second": 1.574,
+    "train_steps_per_second": 0.049
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff