davisrbr committed · verified
Commit 2c828d3 · Parent(s): 771ce62

Model save

Files changed (2)
  1. README.md +4 -4
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -6,14 +6,14 @@ library_name: peft
 tags:
 - generated_from_trainer
 model-index:
-- name: Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16
+- name: Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r8_bs4
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16
+# Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r8_bs4
 
 This model is a fine-tuned version of [ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16) on the red_pajama-data-1_t-sample dataset.
 
@@ -35,11 +35,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 1
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 16
-- total_train_batch_size: 16
+- total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 200
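The two batch-size edits in this hunk are consistent with each other: the Trainer reports total_train_batch_size as train_batch_size × gradient_accumulation_steps × number of devices. A minimal sketch checking the arithmetic for this commit, assuming a single GPU (which the reported totals imply):

```python
# Effective (total) train batch size as the HF Trainer computes it:
# per-device batch size x gradient accumulation steps x device count.
# num_devices=1 is an assumption consistent with the values in the diff.
def total_train_batch_size(per_device: int, grad_accum: int, num_devices: int = 1) -> int:
    return per_device * grad_accum * num_devices

print(total_train_batch_size(1, 16))  # before this commit: 16
print(total_train_batch_size(4, 16))  # after this commit: 64
```

So raising train_batch_size from 1 to 4 with gradient_accumulation_steps fixed at 16 quadruples the effective batch size from 16 to 64, matching both changed lines.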
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e08b94cd718e3b61f74ad50c6d14db95a6941cd76dde4c345c715255720d6198
+oid sha256:4105c8151e3a0b2030988d43b65baf62cb45b8a787696db58f22defb59546108
 size 83945296