kimlong22 committed · verified
Commit 6bd70d6 · Parent: 9e1bd87

Model save
README.md ADDED
@@ -0,0 +1,77 @@
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: lex-cross-encoder-mbert-10neg
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# lex-cross-encoder-mbert-10neg

This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4360
- Precision: 0.6020
- Recall: 0.8593
- F2: 0.7917
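
F2 is the F-beta score with beta = 2, which weights recall twice as heavily as precision. A minimal sketch of the computation, checked against the final-epoch numbers above:

```python
# F-beta: beta > 1 favors recall; F2 uses beta = 2.
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.6020, 0.8593))  # ~0.7916; the card's 0.7917 comes from unrounded P/R
```
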
## Model description

More information needed

## Intended uses & limitations

More information needed
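
No usage notes were provided, so the following is only a sketch: a cross-encoder fine-tuned from mBERT is typically loaded as a sequence-classification model and scores a (query, candidate) text pair jointly. The repo id `kimlong22/lex-cross-encoder-mbert-10neg` is inferred from the model name, and how to read the logits depends on how the classification head was configured during training, which this card does not specify.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kimlong22/lex-cross-encoder-mbert-10neg"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# A cross-encoder encodes the pair together in one forward pass.
inputs = tokenizer("query text", "candidate text",
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # single relevance logit vs. two-class softmax: not specified here
```
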
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
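
Per-device batches of 16 across 8 GPUs give the total batch size of 128 (8 × 16). A hedged `transformers.TrainingArguments` sketch of these settings (the output directory is an assumption; the actual training script is not part of this commit):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lex-cross-encoder-mbert-10neg",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # x 8 GPUs -> total 128
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
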
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F2     |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|
| 0.4572        | 1.0   | 2317  | 0.4705          | 0.4735    | 0.8620 | 0.7405 |
| 0.4283        | 2.0   | 4634  | 0.4515          | 0.4774    | 0.9124 | 0.7718 |
| 0.4115        | 3.0   | 6951  | 0.4485          | 0.4796    | 0.9201 | 0.7773 |
| 0.4021        | 4.0   | 9268  | 0.4387          | 0.5217    | 0.9068 | 0.7902 |
| 0.3918        | 5.0   | 11585 | 0.4466          | 0.6111    | 0.8242 | 0.7705 |
| 0.3879        | 6.0   | 13902 | 0.4337          | 0.5783    | 0.8767 | 0.7947 |
| 0.3830        | 7.0   | 16219 | 0.4336          | 0.5633    | 0.8907 | 0.7980 |
| 0.3781        | 8.0   | 18536 | 0.4354          | 0.5929    | 0.8660 | 0.7930 |
| 0.3767        | 9.0   | 20853 | 0.4353          | 0.5980    | 0.8636 | 0.7931 |
| 0.3712        | 10.0  | 23170 | 0.4360          | 0.6020    | 0.8593 | 0.7917 |
### Framework versions

- Transformers 4.39.1
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.15.2
final_model/model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8a1eade862a6b0bea06c28ea43a3e64fd17158c8bbc531f84fe0346f9ce3b4cf
+oid sha256:91eb96d3922be157532116be345e306074157b7a7a4a7335c0238c11a13104fb
 size 711443456
final_model/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e1ef3d87fea343e754d56ca617544c2210d765ee87a075cf9701d0371c164e09
-size 4984
+oid sha256:7bc931ca29050915c36f13d6eefa6efb4cca44f2f3756faa2eeed66e1b2bfd1b
+size 5112
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4604cfa7ec9127271a29927cf1c93b409a80c993fcb18d6179db5e73184cf4e7
+oid sha256:91eb96d3922be157532116be345e306074157b7a7a4a7335c0238c11a13104fb
 size 711443456
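
The changed weight files above are Git LFS pointers: the repository tracks only a sha256 oid and a byte size, while the payload lives in LFS storage. A downloaded copy can therefore be checked against the pointer's oid (a sketch; the local filename is an assumption):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file so large checkpoints don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

expected = "91eb96d3922be157532116be345e306074157b7a7a4a7335c0238c11a13104fb"
print(sha256_of("model.safetensors") == expected)  # True if the download matches
```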