Update README.md
README.md CHANGED
@@ -9,29 +9,57 @@ pipeline_tag: text-classification

# HasinMDG/XLM_Roberta_Large_Sentiment_Toward_Entity_on_Topics_Baseline_Squad

## General description of the model

Unlike a classical sentiment classifier, this model was built to measure the sentiment towards a particular entity on a particular pre-determined topic. Consider the following example:

```python
example = "I pity Facebook for their lack of commitment against global warming, I like Google for its support of increased education"
```

In this example, depending on the entity (Facebook or Google) and on the topic (climate change or education), the same sentence expresses two different sentiments, as the snippet below shows:

```python
model = ...  # load the inference pipeline described in the Usage section below
text = "I pity Facebook for their lack of commitment against global warming, I like Google for its support of increased education"

# Predict the sentiment towards Facebook (entity) on climate change (topic)
sentiment, probability = model.predict(text, topic="climate change", entity="Facebook")
# sentiment = "negative"

# Predict the sentiment towards Google (entity) on education (topic)
sentiment, probability = model.predict(text, topic="education", entity="Google")
# sentiment = "positive"

# Predict the sentiment towards Google (entity) on climate change (topic)
sentiment, probability = model.predict(text, topic="climate change", entity="Google")
# sentiment = "neutral" / "not_found"

# Predict the sentiment towards Facebook (entity) on education (topic)
sentiment, probability = model.predict(text, topic="education", entity="Facebook")
# sentiment = "neutral" / "not_found"
```

## Training

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for sentiment classification.
The model has been trained using an efficient few-shot learning technique (sketched below) that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
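For orientation, the two steps above map onto the `SetFitTrainer` API of setfit 0.5.0 (the version pinned in the Usage section below). The sketch that follows is illustrative only: the base checkpoint, the toy examples, and the way entity and topic are folded into the input text are assumptions, not the exact recipe used to produce this model.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot examples; the "entity | topic | sentence" input format is an assumption.
train_dataset = Dataset.from_dict({
    "text": [
        "Facebook | climate change | I pity Facebook for their lack of commitment against global warming",
        "Google | education | I like Google for its support of increased education",
        "Facebook | climate change | Facebook's new data centers run entirely on renewable energy",
        "Google | education | Google keeps ignoring requests to fund school programs",
    ],
    "label": [0, 1, 1, 0],  # 0 = negative, 1 = positive (label scheme is illustrative)
})

# Illustrative base checkpoint; the model name suggests an XLM-RoBERTa-large
# Sentence Transformer was used in practice.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the Sentence Transformer body
    num_iterations=20,                # controls how many contrastive text pairs are generated
    num_epochs=1,
)
trainer.train()  # runs step 1, then step 2: fits the classification head on the tuned embeddings
model.save_pretrained("xlmr-sentiment-toward-entity-on-topic")
```

The actual training set, with its exact label scheme and text format, is the spreadsheet linked below.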

The training data can be downloaded from [here](https://docs.google.com/spreadsheets/d/1BVDardwVs04ZWmc5_Eg62Lyr_w_OuXysQwhne8ErkoA/edit?usp=sharing).
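If you prefer to pull the sheet programmatically, one option (assuming it remains publicly viewable) is Google Sheets' CSV export endpoint; the column layout of the sheet is not documented here, so inspect it after loading:

```python
import pandas as pd

# CSV export of the shared Google Sheet (reading the first worksheet, gid=0, is an assumption)
SHEET_ID = "1BVDardwVs04ZWmc5_Eg62Lyr_w_OuXysQwhne8ErkoA"
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid=0"

df = pd.read_csv(url)
print(df.columns.tolist())  # check the column names before building a training set from it
print(df.head())
```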

## Usage

To use this model for inference, first install the required libraries:

```bash
pip install -U datasets
python -m pip install setfit==0.5.0
pip install -U sentence-transformers
```

For a global overview of the pipeline used for inference, please refer to this [colab notebook](https://colab.research.google.com/drive/1GgEGrhQZfA1pbcB9Zl0VtV7L5wXdh6vj?usp=sharing).
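The notebook wraps the checkpoint in the entity/topic-aware `model.predict(text, topic=..., entity=...)` interface shown above. If you only need the raw SetFit classifier, a minimal sketch is below; how entity and topic are combined with the sentence before it is fed to the model is an assumption here, so follow the notebook for the exact preprocessing:

```python
from setfit import SetFitModel

# Download the checkpoint from the Hub
model = SetFitModel.from_pretrained(
    "HasinMDG/XLM_Roberta_Large_Sentiment_Toward_Entity_on_Topics_Baseline_Squad"
)

# Raw inference on pre-formatted inputs; the "entity | topic | sentence" format
# is an assumption, not the documented input format.
preds = model([
    "Facebook | climate change | I pity Facebook for their lack of commitment against global warming",
    "Google | education | I like Google for its support of increased education",
])
print(preds)
```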
## BibTeX entry and citation info