sercetexam9 committed on
Commit cd2d0e3 · verified · 1 Parent(s): e7154be

Model save

Files changed (2)
  1. README.md +72 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: Davlan/afro-xlmr-large-76L
+ tags:
+ - generated_from_trainer
+ metrics:
+ - f1
+ - accuracy
+ model-index:
+ - name: cs221-afro-xlmr-large-76L-pcm-finetuned-10-epochs
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # cs221-afro-xlmr-large-76L-pcm-finetuned-10-epochs
+
+ This model is a fine-tuned version of [Davlan/afro-xlmr-large-76L](https://huggingface.co/Davlan/afro-xlmr-large-76L) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4041
+ - F1: 0.6316
+ - Roc Auc: 0.7539
+ - Accuracy: 0.3525
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
+ | 0.4949        | 1.0   | 94   | 0.4847          | 0.4327 | 0.6336  | 0.2185   |
+ | 0.4129        | 2.0   | 188  | 0.3764          | 0.5659 | 0.7068  | 0.2949   |
+ | 0.3874        | 3.0   | 282  | 0.3892          | 0.6153 | 0.7428  | 0.3257   |
+ | 0.3357        | 4.0   | 376  | 0.3524          | 0.6229 | 0.7445  | 0.3512   |
+ | 0.2843        | 5.0   | 470  | 0.3586          | 0.6422 | 0.7578  | 0.3861   |
+ | 0.2229        | 6.0   | 564  | 0.3846          | 0.6287 | 0.7544  | 0.3445   |
+ | 0.2028        | 7.0   | 658  | 0.3966          | 0.6240 | 0.7484  | 0.3592   |
+ | 0.1728        | 8.0   | 752  | 0.4041          | 0.6316 | 0.7539  | 0.3525   |
+
+
+ ### Framework versions
+
+ - Transformers 4.48.0
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
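
The hyperparameters listed in the card above correspond to a fairly standard `Trainer` run. The following is a minimal sketch of that setup, not the author's actual training script: the multi-label head, the placeholder label count of 5, and per-epoch evaluation are assumptions (the card names neither the dataset nor the label set; the F1 / Roc Auc / Accuracy trio merely suggests multi-label evaluation).

```python
# Sketch of the training configuration implied by the hyperparameters above.
# Assumptions: multi-label classification head, 5 labels (placeholder),
# per-epoch evaluation. AdamW (torch) is the Trainer default and matches the card.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "Davlan/afro-xlmr-large-76L"
tokenizer = AutoTokenizer.from_pretrained(base_model)  # used to tokenize the splits (preprocessing not shown)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model,
    num_labels=5,  # placeholder: the real label count is not stated in the card
    problem_type="multi_label_classification",  # assumption based on the reported metrics
)

training_args = TrainingArguments(
    output_dir="cs221-afro-xlmr-large-76L-pcm-finetuned-10-epochs",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    eval_strategy="epoch",  # the results table reports one evaluation per epoch
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds,   # not shown: tokenized training split
#                   eval_dataset=eval_ds)     # not shown: tokenized evaluation split
# trainer.train()
```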
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6942a1b198d2cb802327907824b4a304e51e98ba9c9323deab116365fdd76156
+ oid sha256:471c6c8c47f6e24f9354ae41a7648590839eb0d9aa9ee2dce8e9046417f09fe4
  size 2239635072
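
For reference, a minimal inference sketch against the uploaded weights. The repository id is assumed from the committer name and the card title, and sigmoid decoding with a 0.5 threshold is an assumption tied to the multi-label metrics; the actual label names live in the checkpoint's `config.json` (`id2label`).

```python
# Minimal inference sketch (assumed repo id and assumed multi-label decoding).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "sercetexam9/cs221-afro-xlmr-large-76L-pcm-finetuned-10-epochs"  # assumed namespace

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "I dey happy well well today"  # example Nigerian Pidgin (pcm) input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

# Sigmoid + 0.5 threshold: the usual decoding for a multi-label head.
probs = torch.sigmoid(logits)
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```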