mircoboettcher committed
Commit 03a8913 · verified · 1 parent: f4829f5

End of training

README.md ADDED
@@ -0,0 +1,94 @@
---
library_name: transformers
license: mit
base_model: dslim/bert-base-NER
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-wnut17-optimized
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wnut_17
      type: wnut_17
      config: wnut_17
      split: test
      args: wnut_17
    metrics:
    - name: Precision
      type: precision
      value: 0.5794655414908579
    - name: Recall
      type: recall
      value: 0.3818350324374421
    - name: F1
      type: f1
      value: 0.46033519553072627
    - name: Accuracy
      type: accuracy
      value: 0.9485338120885697
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-wnut17-optimized

This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2901
- Precision: 0.5795
- Recall: 0.3818
- F1: 0.4603
- Accuracy: 0.9485
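
## How to use

A minimal inference sketch, assuming the checkpoint is published as `mircoboettcher/bert-wnut17-optimized` (a hypothetical repo id built from the card name; substitute the actual path or a local directory):

```python
from transformers import pipeline

# Hypothetical repo id; point this at the actual checkpoint location.
ner = pipeline(
    "token-classification",
    model="mircoboettcher/bert-wnut17-optimized",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Empire State Building is in New York"))
```

With `aggregation_strategy="simple"` the pipeline merges sub-word tokens, so each result is a whole entity span with its group label and score.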

## Model description

The model is [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER), a BERT-base model already fine-tuned for NER, further fine-tuned for token classification on the WNUT-17 dataset.

## Intended uses & limitations

Intended for named-entity recognition over noisy, user-generated text of the kind collected for WNUT-17. Note the modest recall (0.38 on the test split): the model misses a substantial share of entities, so it is better suited to high-precision tagging than to exhaustive extraction.

## Training and evaluation data

The model was trained and evaluated on [wnut_17](https://huggingface.co/datasets/wnut_17), the WNUT 2017 shared task on novel and emerging entity recognition; the metrics above are reported on its test split.
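
A minimal loading sketch, assuming the public `wnut_17` dataset id on the Hub:

```python
from datasets import load_dataset

# WNUT-17 ships with train / validation / test splits.
wnut = load_dataset("wnut_17")
print(wnut)

# The label set is IOB2 tags over six entity types
# (corporation, creative-work, group, location, person, product).
print(wnut["train"].features["ner_tags"].feature.names)
```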

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2.631245451057452e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
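
As a rough reconstruction (the training script itself is not part of this card), the list above maps onto `TrainingArguments` like this; `output_dir` and the per-epoch evaluation are assumptions inferred from the card:

```python
from transformers import TrainingArguments

# Sketch reconstructed from the reported hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="bert-wnut17-optimized",  # assumed from the model name
    learning_rate=2.631245451057452e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    eval_strategy="epoch",  # the results table below logs validation once per epoch
)
```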

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 213  | 0.2365          | 0.5265    | 0.4235 | 0.4694 | 0.9478   |
| No log        | 2.0   | 426  | 0.2692          | 0.5710    | 0.3689 | 0.4482 | 0.9480   |
| 0.2086        | 3.0   | 639  | 0.2901          | 0.5795    | 0.3818 | 0.4603 | 0.9485   |

### Framework versions

- Transformers 4.47.1
- PyTorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b327f68523f9ca654a9d3c730d9f176fe29029a87930a4933b4316f89c85e5a6
+ oid sha256:5929bf1db941cb0da6a38e23852b7a36f4ef271d916fe2f310ce0116b0ce36d4
  size 430942044
runs/Jan15_05-55-18_782c1a5fdbb2/events.out.tfevents.1736920522.782c1a5fdbb2.207.10 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d98737dcba7eca7015c4263433ea8ff99e04b6ff59607158d0312007c5a202c1
- size 6951
+ oid sha256:daf4a54ed88f5039716e32e959b10410f3ac16012ff69ebde615b7606f49940b
+ size 7777