ikerion committed
Commit bb67109 · verified · 1 Parent(s): e76786f

Model save

Files changed (2):
  1. README.md +25 -11
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-library_name: peft
 license: apache-2.0
-base_model: mistralai/Mistral-7B-Instruct-v0.2
+library_name: peft
 tags:
 - generated_from_trainer
+base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
 - name: mistral-magyar-portas-lora
   results: []
@@ -15,6 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 # mistral-magyar-portas-lora
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0117
 
 ## Model description
 
@@ -33,22 +35,34 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
+- learning_rate: 0.0002
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
 - gradient_accumulation_steps: 4
 - total_train_batch_size: 8
-- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 100
-- num_epochs: 5
-- mixed_precision_training: Native AMP
+- num_epochs: 3
+
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 0.0323        | 0.3918 | 500  | 0.0634          |
+| 0.0123        | 0.7837 | 1000 | 0.0131          |
+| 0.0094        | 1.1755 | 1500 | 0.0121          |
+| 0.0093        | 1.5674 | 2000 | 0.0120          |
+| 0.0088        | 1.9592 | 2500 | 0.0111          |
+| 0.0065        | 2.3511 | 3000 | 0.0119          |
+| 0.0064        | 2.7429 | 3500 | 0.0117          |
+
 
 ### Framework versions
 
-- PEFT 0.15.2
-- Transformers 4.52.4
-- Pytorch 2.6.0+cu124
-- Datasets 2.14.4
-- Tokenizers 0.21.1
+- PEFT 0.7.1
+- Transformers 4.40.2
+- Pytorch 2.1.1+cu121
+- Datasets 2.16.1
+- Tokenizers 0.19.1
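
In practical terms, this commit publishes an updated model card plus refreshed LoRA adapter weights (via PEFT) on top of mistralai/Mistral-7B-Instruct-v0.2. Below is a minimal loading sketch, not the author's own usage code: the repo id ikerion/mistral-magyar-portas-lora is a hypothetical guess inferred from the committer and the model-index name, and the versions follow the card's pins (PEFT 0.7.1, Transformers 4.40.2).

```python
# Minimal sketch: load the base model, then attach this LoRA adapter with PEFT.
# The adapter repo id below is hypothetical, inferred from the committer name
# and the model-index name in the card above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"     # base_model from the card
adapter_id = "ikerion/mistral-magyar-portas-lora"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

# PeftModel.from_pretrained reads adapter_config.json and the
# adapter_model.safetensors weights committed here (~335 MB).
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# "magyar" in the model name suggests Hungarian-language tuning, so a
# Hungarian prompt in the Mistral-Instruct chat format is used here.
prompt = "[INST] Szia! [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, `model.merge_and_unload()` folds the LoRA deltas into the base weights.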
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:abdb2545f9019d29f61988b8a09742663e67a63341580409226d2612798d0fc8
+oid sha256:0feff8954fcd1be4a316b874df88395f71c75cb719a3bcf6b84d441a3cabcf8e
 size 335605144
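
From git's perspective the weights change is pointer-only: the LFS oid (sha256) changes while the size stays 335605144 bytes, consistent with a retrain of the same adapter shape. A small sketch, assuming a local download of the file at the path below, to check it against the new pointer:

```python
# Minimal sketch: verify a downloaded adapter_model.safetensors against the
# Git LFS pointer in this commit (expected oid and size taken from the diff above).
import hashlib
import os

path = "adapter_model.safetensors"  # assumed local download path
expected_oid = "0feff8954fcd1be4a316b874df88395f71c75cb719a3bcf6b84d441a3cabcf8e"
expected_size = 335605144

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("adapter_model.safetensors matches the LFS pointer")
```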