Update README.md
README.md CHANGED
@@ -1,9 +1,32 @@
 ---
 metrics:
 - f1
-Training Loss: 0.5291
-Validation Loss: 0.570805
-F1: 0.802939
 language:
 - en
-
+datasets:
+- argilla/twitter-coronavirus
+tags:
+- generated_from_trainer
+---
+
+The following hyperparameters were used during training:
+- learning_rate: 5e-5
+- train_batch_size: 16
+- eval_batch_size: 16
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 3
+- mixed_precision_training: Native AMP
+- warmup_ratio: 0.1
+- weight_decay=1e-2
+
+### Training results
+
+| Training Loss | Epoch | Validation Loss | F1       | F1 Macro |
+|:-------------:|:-----:|:---------------:|:--------:|:--------:|
+| 1.3957        | 1.0   | 1.0134          | 0.242860 | 0.124580 |
+| 0.8715        | 2.0   | 0.6892          | 0.243673 | 0.113322 |
+| 0.6085        | 3.0   | 0.4943          | 0.319262 | 0.191744 |
+| 0.6085        | 3.0   | 0.4943          | 0.319262 | 0.191744 |
+| 0.6085        | 3.0   | 0.4943          | 0.319262 | 0.191744 |
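The `generated_from_trainer` tag and the hyperparameter list added above correspond to a standard Hugging Face `transformers` `Trainer` run. As a rough sketch (not taken from the card itself), the listed values map onto a `TrainingArguments` object like the following; the output directory is a placeholder, and the Adam betas/epsilon named in the card are the library defaults:

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the settings listed in the README diff above.
training_args = TrainingArguments(
    output_dir="finetuned-model",     # placeholder; not named in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=16,    # eval_batch_size: 16
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999) ...
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # ... and epsilon=1e-08 (library defaults)
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                        # mixed_precision_training: Native AMP
    warmup_ratio=0.1,
    weight_decay=1e-2,
)
```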
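The results table reports both `F1` and `F1 Macro`, but the card does not say which averaging the plain `F1` column uses. A minimal `compute_metrics` sketch that would log both values, assuming the plain column is weighted-averaged:

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    """Return the two F1 variants shown in the training-results table."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        # Assumption: the plain "F1" column is a weighted average over classes.
        "f1": f1_score(labels, preds, average="weighted"),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
```

Passing this as `Trainer(..., compute_metrics=compute_metrics)` would add both columns to the per-epoch evaluation log.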