Aa123564 committed (verified)
Commit 2ad145d · 1 parent: 0aae1ef

Model save
README.md CHANGED
@@ -2,13 +2,12 @@
 license: apache-2.0
 library_name: peft
 tags:
-- alignment-handbook
 - trl
 - sft
 - generated_from_trainer
 base_model: mistralai/Mistral-7B-v0.1
 datasets:
-- HuggingFaceH4/ultrachat_200k
+- generator
 model-index:
 - name: zephyr-7b-sft-qlora
   results: []
@@ -19,9 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # zephyr-7b-sft-qlora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9543
+- Loss: 0.9524
 
 ## Model description
 
@@ -41,30 +40,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 1
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 2
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 4
-- total_eval_batch_size: 16
+- num_devices: 8
+- total_train_batch_size: 32
+- total_eval_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 1.0
+- num_epochs: 1
 
 ### Training results
 
-| Training Loss | Epoch  | Step  | Validation Loss |
-|:-------------:|:------:|:-----:|:---------------:|
-| 0.8484        | 1.0000 | 34856 | 0.9543          |
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 0.9467        | 1.0   | 17429 | 0.9524          |
 
 
 ### Framework versions
 
 - PEFT 0.7.1
 - Transformers 4.41.0.dev0
-- Pytorch 2.0.1+cu117
+- Pytorch 2.1.2
 - Datasets 2.19.0
 - Tokenizers 0.19.1
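The updated hyperparameters are internally consistent even though `gradient_accumulation_steps` no longer appears in the card: with the usual Trainer convention of total batch = per-device batch × devices × accumulation steps, the new totals imply an accumulation factor of 1. A quick sanity check (my own sketch, not code from this repo):

```python
# Sanity-check the effective batch sizes implied by the updated card.
# gradient_accumulation_steps = 1 is an assumption: the field was removed
# from the card, and 4 * 8 * 1 = 32 matches total_train_batch_size.
train_batch_size = 4               # per device
eval_batch_size = 8                # per device
num_devices = 8
gradient_accumulation_steps = 1    # assumed

assert train_batch_size * num_devices * gradient_accumulation_steps == 32
assert eval_batch_size * num_devices == 64
```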
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2b554fb947e87aac90a5cbfaf43a38e0128aaf027266259ab1e31ab579766e02
+oid sha256:f72d92ea60154f7be85c721556b2f6e30aa243be84b80d01b626f9658cba8d51
 size 83946192
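Only the pointer's `oid` changed; `size` is unchanged at 83946192 bytes, so the retrained adapter serializes to exactly the same size. Since a Git LFS `oid sha256:` is the SHA-256 digest of the tracked file's contents, a downloaded copy can be checked against the new pointer; a minimal sketch (the local path is an assumption):

```python
# Verify a local adapter file against the LFS pointer's oid
# (the oid is the sha256 of the file's raw contents).
import hashlib

EXPECTED = "f72d92ea60154f7be85c721556b2f6e30aa243be84b80d01b626f9658cba8d51"

with open("adapter_model.safetensors", "rb") as f:  # path assumed
    digest = hashlib.sha256(f.read()).hexdigest()

print("ok" if digest == EXPECTED else "corrupted or wrong revision")
```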
all_results.json CHANGED
@@ -1,14 +1,9 @@
 {
-    "epoch": 0.9999856554731542,
-    "eval_loss": 0.9543404579162598,
-    "eval_runtime": 2086.5983,
-    "eval_samples": 23109,
-    "eval_samples_per_second": 7.395,
-    "eval_steps_per_second": 0.462,
-    "total_flos": 1.2236563132662678e+19,
-    "train_loss": 0.9581741800145707,
-    "train_runtime": 92834.4231,
+    "epoch": 1.0,
+    "total_flos": 1.2254844500131709e+19,
+    "train_loss": 0.9352519262663926,
+    "train_runtime": 32149.2413,
     "train_samples": 207864,
-    "train_samples_per_second": 1.502,
-    "train_steps_per_second": 0.375
+    "train_samples_per_second": 4.337,
+    "train_steps_per_second": 0.542
 }
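The headline change here is runtime: 32149 s is roughly 8.9 hours, versus 92834 s (about 25.8 hours) for the previous run. Reading the stats back is a one-liner; a small sketch, assuming a local clone with this file present:

```python
# Read the training summary from a local checkout (path assumed).
import json

with open("all_results.json") as f:
    stats = json.load(f)

print(f"{stats['train_runtime'] / 3600:.1f} h "
      f"at {stats['train_samples_per_second']} samples/s")  # ~8.9 h at 4.337
```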
runs/May03_23-45-29_p4d-24xl-us-west-2-st-p4d-24xl-us-west-2-5/events.out.tfevents.1714779964.p4d-24xl-us-west-2-st-p4d-24xl-us-west-2-5.46666.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bcdc1afcff7602768f5cd8ae0e7f4073a56ddf12e24ad0dfc7e74fa00c48e57f
-size 723618
+oid sha256:222ad237d8ef8798ebc213ba6a6868c44228decf76b0a3f41ba8d6b760a2fe79
+size 725329
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
-    "epoch": 0.9999856554731542,
-    "total_flos": 1.2236563132662678e+19,
-    "train_loss": 0.9581741800145707,
-    "train_runtime": 92834.4231,
+    "epoch": 1.0,
+    "total_flos": 1.2254844500131709e+19,
+    "train_loss": 0.9352519262663926,
+    "train_runtime": 32149.2413,
     "train_samples": 207864,
-    "train_samples_per_second": 1.502,
-    "train_steps_per_second": 0.375
+    "train_samples_per_second": 4.337,
+    "train_steps_per_second": 0.542
 }
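These numbers cross-check against the README table above: steps-per-second multiplied by runtime should approximately reproduce the reported step count. A quick check of mine, not from the repo:

```python
# steps/sec × runtime ≈ total optimizer steps (values from train_results.json)
train_runtime = 32149.2413
train_steps_per_second = 0.542

print(round(train_runtime * train_steps_per_second))
# -> 17425, close to the 17429 steps reported in the README's results table;
# the small gap is rounding in the stored steps-per-second value.
```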
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff