Update README.md
README.md
CHANGED
@@ -27,5 +27,18 @@ To browse an interactive visualization of the *Mawqif* dataset, please click [he
 # Citation
 If you feel our paper and resources are useful, please consider citing our work!
 ```
-
+@inproceedings{alturayeif-etal-2022-mawqif,
+    title = "Mawqif: A Multi-label {A}rabic Dataset for Target-specific Stance Detection",
+    author = "Alturayeif, Nora Saleh and
+      Luqman, Hamzah Abdullah and
+      Ahmed, Moataz Aly Kamaleldin",
+    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.wanlp-1.16",
+    pages = "174--184",
+    abstract = "Social media platforms are becoming inherent parts of people{'}s daily life to express opinions and stances toward topics of varying polarities. Stance detection determines the viewpoint expressed in a text toward a target. While communication on social media (e.g., Twitter) takes place in more than 40 languages, the majority of stance detection research has been focused on English. Although some efforts have recently been made to develop stance detection datasets in other languages, no similar efforts seem to have considered the Arabic language. In this paper, we present Mawqif, the first Arabic dataset for target-specific stance detection, composed of 4,121 tweets annotated with stance, sentiment, and sarcasm polarities. Mawqif, as a multi-label dataset, can provide more opportunities for studying the interaction between different opinion dimensions and evaluating a multi-task model. We provide a detailed description of the dataset, present an analysis of the produced annotation, and evaluate four BERT-based models on it. Our best model achieves a macro-F1 of 78.89{\%}, which shows that there is ample room for improvement on this challenging task. We publicly release our dataset, the annotation guidelines, and the code of the experiments.",
+}
 ```