up example and pipeline usage
README.md CHANGED
@@ -1,11 +1,17 @@
 ---
 language:
 - pl
+
+pipeline_tag: text-classification
+
+widget:
+- text: "Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą, pytania w większości o mało istotnych sprawach"
+  example_title: "example 1"
 
 tags:
 - text
 - sentiment
+- politics
 
 metrics:
 - accuracy
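The `pipeline_tag` and `widget` entries added above are what drive the hosted inference widget on the model page. As a minimal sketch of exercising the same metadata programmatically, the hosted Inference API can be queried over HTTP; the `HF_API_TOKEN` environment variable below is a hypothetical placeholder for any valid Hugging Face access token, not something defined in this repo:

```
import os
import requests

# Hosted Inference API endpoint for this model (text-classification task).
API_URL = "https://api-inference.huggingface.co/models/eevvgg/PaReS-sentimenTw-political-PL"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}  # hypothetical env var

# Same example text as the widget's "example 1".
payload = {"inputs": "Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą, pytania w większości o mało istotnych sprawach"}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. [[{'label': 'Negative', 'score': ...}, ...]]
```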
@@ -31,15 +37,26 @@ model-index:
 This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https://huggingface.co/dkleczek/bert-base-polish-cased-v1) to predict 3-categorical sentiment.
 Fine-tuned on a 1k sample of manually annotated Twitter data.
 
+Mapping:
+id2label = {
+    0: 'Negative',
+    1: 'Neutral',
+    2: 'Positive'
+}
+
+```
+from transformers import pipeline
+
+model_path = "eevvgg/PaReS-sentimenTw-political-PL"
+sentiment_task = pipeline(task="sentiment-analysis", model=model_path, tokenizer=model_path)
+
+# Example tweets, roughly: "This whole ridiculous debate was an attempt to hide the
+# economic problems that exist and are coming; the questions were mostly about
+# trivial matters" and "Bravo, minister!"
+sequence = ["Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą, pytania w większości o mało istotnych sprawach",
+            "Brawo panie ministrze!"]
+
+result = sentiment_task(sequence)
+labels = [i['label'] for i in result]  # ['Negative', 'Positive']
+```
 
 ## Intended uses & limitations
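The snippet above keeps only the top label per tweet. When the full class distribution is useful, for example to apply a confidence threshold before trusting a prediction, the same pipeline can return scores for all three labels; the `top_k=None` call argument below assumes a reasonably recent transformers release (older ones used `return_all_scores=True`):

```
from transformers import pipeline

model_path = "eevvgg/PaReS-sentimenTw-political-PL"
sentiment_task = pipeline(task="sentiment-analysis", model=model_path, tokenizer=model_path)

# top_k=None asks the text-classification pipeline to return a score for
# every label instead of only the argmax.
scores = sentiment_task("Brawo panie ministrze!", top_k=None)
print(scores)  # e.g. [{'label': 'Positive', 'score': 0.99}, {'label': 'Neutral', ...}, ...]
```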
@@ -49,22 +66,21 @@ Sentiment detection in Polish data (fine-tuned on tweets from political domain).
 
 ## Training and evaluation data
 
-Trained for 3 epochs, mini-batch size of 8.
-Training results: loss: 0.1358926964368792
+#### Trained for 3 epochs, mini-batch size of 8.
+#### Training results: loss: 0.1358926964368792
 
 
 ## Evaluation procedure
 
 It achieves the following results on the test set (10%):
 
+#### Num examples = 100
+#### Batch size = 8
+#### Accuracy = 0.950
+#### F1-macro = 0.944
 
 precision recall f1-score support
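The annotated tweets themselves are not shipped with the model, so the author's exact script cannot be reproduced here. The following is a minimal sketch of a training-plus-evaluation loop consistent with what the card states (the base checkpoint, 3 labels, 3 epochs, mini-batch size 8, and an sklearn-style accuracy / F1-macro / per-class report); the placeholder texts and labels are hypothetical stand-ins for the ~1k annotated training tweets and 100 test tweets:

```
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, classification_report, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "dkleczek/bert-base-polish-cased-v1"
id2label = {0: "Negative", 1: "Neutral", 2: "Positive"}

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=3, id2label=id2label,
    label2id={v: k for k, v in id2label.items()})

# Hypothetical placeholder data: substitute the manually annotated tweets
# (texts as str, labels as 0/1/2), which are not released with this repo.
train_texts, train_labels = ["przykładowy tweet"], [1]
test_texts, test_labels = ["Brawo panie ministrze!", "inny tweet"], [2, 0]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(tokenize, batched=True)
test_ds = Dataset.from_dict({"text": test_texts, "label": test_labels}).map(tokenize, batched=True)

# Hyperparameters from the card: 3 epochs, mini-batch size of 8.
args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=8,
                         per_device_eval_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

# Evaluation in the card's terms: accuracy, F1-macro, per-class report.
preds = np.argmax(trainer.predict(test_ds).predictions, axis=-1)
print("accuracy:", accuracy_score(test_labels, preds))
print("f1-macro:", f1_score(test_labels, preds, average="macro"))
print(classification_report(test_labels, preds, labels=[0, 1, 2],
                            target_names=list(id2label.values())))
```

The `classification_report` call is what produces a "precision recall f1-score support" table of the kind shown above.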