deb101 committed
Commit 67f403b · verified · 1 Parent(s): 5b3f130

Model save

Files changed (3):
  1. README.md +8 -6
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -18,10 +18,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ministral/Ministral-3b-instruct](https://huggingface.co/ministral/Ministral-3b-instruct) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 2.6892
- - Model Preparation Time: 0.0125
- - Accuracy: 0.5202
- - Perplexity: 14.7202
+ - Loss: 2.2631
+ - Model Preparation Time: 0.0126
+ - Accuracy: 0.5672
+ - Perplexity: 9.6125
 
 ## Model description
 
@@ -48,14 +48,16 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 4
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
- - num_epochs: 1
+ - num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | Perplexity |
 |:-------------:|:------:|:-----:|:---------------:|:----------------------:|:--------:|:----------:|
- | 2.8172 | 1.0000 | 30564 | 2.6892 | 0.0125 | 0.5202 | 14.7202 |
+ | 2.3634 | 1.0 | 30565 | 2.3471 | 0.0126 | 0.5535 | 10.4554 |
+ | 2.381 | 2.0 | 61130 | 2.2835 | 0.0126 | 0.5619 | 9.8107 |
+ | 2.3067 | 2.9999 | 91692 | 2.2631 | 0.0126 | 0.5672 | 9.6125 |
 
 
 ### Framework versions
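
The perplexity column in the updated card is simply the exponential of the validation cross-entropy loss, which matches the new numbers. A minimal check using only the standard library and the values from the updated results table:

```python
import math

# (validation loss, reported perplexity) pairs from the updated results table
rows = [
    (2.3471, 10.4554),  # epoch 1.0
    (2.2835, 9.8107),   # epoch 2.0
    (2.2631, 9.6125),   # epoch 2.9999 (final)
]

for loss, reported in rows:
    ppl = math.exp(loss)  # perplexity = exp(cross-entropy loss)
    print(f"loss={loss:.4f}  exp(loss)={ppl:.4f}  reported={reported:.4f}")
```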
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c88f9cc9a98b2977689f1c3a7463ad238e06eb2303ad3bd4e8df78b8e8c5f941
+ oid sha256:1bc69c1c2ce8b7887e34b511d0fcbd825f5454a5b3b2aa114fafb50999c76ee3
 size 11935008
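
Only the LFS pointer's SHA-256 oid changes here; the ~11.9 MB adapter payload itself lives in LFS storage. Assuming this file is a PEFT adapter for the base model named in the model card (the standard `adapter_model.safetensors` filename suggests that, but it is an assumption), a minimal loading sketch could look like this; the adapter repo id below is a placeholder, not the actual repository name:

```python
# Sketch only: assumes the repo hosts a PEFT adapter for the base model from the README.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ministral/Ministral-3b-instruct"     # base model from the model card
adapter_id = "your-username/your-adapter-repo"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # pulls adapter_model.safetensors
```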
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:33c132f516d11637882cd935e90238237d00ed8e00f75f31b5c9c8639b8bf23f
+ oid sha256:d89d1d2162cf22cfc9742266df0a7957fcd54c7d01b3d3773e949b56a3ac7287
 size 5432
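
training_args.bin is likewise only a pointer update. The `Trainer` conventionally writes this file with `torch.save` as a pickled `TrainingArguments` object, so a downloaded copy can be inspected as sketched below (the local path is illustrative; recent PyTorch requires `weights_only=False` to unpickle arbitrary objects, which should only be done for files you trust):

```python
# Sketch only: assumes training_args.bin is a pickled TrainingArguments, as Trainer saves it.
import torch

args = torch.load("training_args.bin", weights_only=False)

print(args.num_train_epochs)   # should now read 3 rather than 1
print(args.lr_scheduler_type)  # linear scheduler, per the model card
```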