---
license: mit
task_categories:
- text-classification
- table-question-answering
- sentence-similarity
language:
- ar
- en
pretty_name: Combined Arabic NLI Dataset
size_categories:
- 10K<n<100K
---

# Dataset Card for Combined Arabic NLI Dataset

This dataset is a comprehensive collection of Arabic sentence pairs for Natural Language Inference (NLI), created by combining all five Arabic splits from the `MoritzLaurer/multilingual-NLI-26lang-2mil7` dataset. It contains sentence pairs labeled as entailment, neutral, or contradiction, making it a valuable general-purpose resource for Arabic NLP.

## Dataset Details

### Dataset Description

This dataset merges five distinct Arabic NLI sources into a single, unified collection, simplifying access for training and evaluation. Its key value is diversity: it draws from several domains, including `MNLI`, `Fever-NLI`, `ANLI`, `WANLI`, and `LingNLI`. It is suitable for training and evaluating Arabic models on the standard 3-way NLI task (entailment, neutral, contradiction) and serves as a strong base for developing text classifiers and zero-shot models in Arabic.

- **Curated by:** Salah Abdo
- **Language(s) (NLP):** Arabic (`ar`)

### Dataset Sources

- **Repository:** [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7)
- **Paper:** [Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI](https://osf.io/74b8k)

## Uses

### Direct Use

This dataset is intended for fine-tuning Arabic NLP models for 3-way NLI classification. It can also serve as a base for creating more specialized datasets. For example, users can filter this data to extract:
* **Positive pairs** (`label == 0`) for semantic search and sentence similarity tasks.
* **Contradictory pairs** (`label == 2`) for training models to detect factual inconsistencies.
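
As a minimal sketch, that filtering is a comprehension over the `label` field. The rows below are toy placeholders; in practice they would come from loading the dataset (e.g. with the `datasets` library):

```python
# Sketch: filter NLI pairs by label id.
# Toy rows stand in for real dataset examples.
rows = [
    {"premise": "...", "hypothesis": "...", "label": 0},  # entailment
    {"premise": "...", "hypothesis": "...", "label": 1},  # neutral
    {"premise": "...", "hypothesis": "...", "label": 2},  # contradiction
    {"premise": "...", "hypothesis": "...", "label": 0},  # entailment
]

# Positive pairs (label == 0) for similarity / semantic-search training.
positive_pairs = [r for r in rows if r["label"] == 0]

# Contradictory pairs (label == 2) for inconsistency detection.
contradiction_pairs = [r for r in rows if r["label"] == 2]

print(len(positive_pairs), len(contradiction_pairs))  # 2 1
```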

### Out-of-Scope Use

This dataset is not recommended for tasks that require perfect grammatical fluency or evaluation of subtle linguistic nuances, as the data is machine-translated and may contain noise, artifacts, or unnatural phrasing. It should not be used as a benchmark for human-level translation quality.

## Dataset Structure

The dataset consists of a single file containing sentence pairs and their corresponding labels.

**Data Fields:**
* `premise`: A string representing the premise sentence in Arabic.
* `hypothesis`: A string representing the hypothesis sentence in Arabic.
* `label`: An integer representing the relationship between the premise and hypothesis.
    * `0`: **Entailment** (the hypothesis logically follows from the premise)
    * `1`: **Neutral** (the hypothesis is neither entailed nor contradicted by the premise)
    * `2`: **Contradiction** (the hypothesis logically contradicts the premise)
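
A small helper mapping following the integer scheme above (a sketch, not part of the dataset files) can make downstream code more readable:

```python
# Label id <-> name mapping for the 3-way NLI scheme used in this dataset.
id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
label2id = {name: i for i, name in id2label.items()}

print(id2label[0])             # entailment
print(label2id["contradiction"])  # 2
```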

## Dataset Creation

### Curation Rationale

The motivation for creating this dataset was to provide the research community with a single, large, and diverse Arabic NLI resource. By combining all available Arabic splits from the well-regarded `multilingual-NLI-26lang-2mil7` collection, this dataset simplifies the process of training and evaluating robust Arabic NLI models.

### Source Data

#### Data Collection and Processing

This dataset was prepared by loading all five Arabic splits (`ar_mnli`, `ar_fever`, `ar_anli`, `ar_wanli`, and `ar_ling`) from the `MoritzLaurer/multilingual-NLI-26lang-2mil7` dataset on the Hugging Face Hub. These individual splits were then concatenated into a single, unified dataset, preserving all original columns and labels without any filtering.
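
The merge step can be sketched in plain Python. The toy rows below stand in for the five real splits; with the `datasets` library the same operation would use `load_dataset` followed by `concatenate_datasets`:

```python
# Sketch of the merge: concatenate five splits into one collection,
# preserving all columns and labels without any filtering.
# Toy rows stand in for the real machine-translated examples.
splits = {
    "ar_mnli":  [{"premise": "p1", "hypothesis": "h1", "label": 0}],
    "ar_fever": [{"premise": "p2", "hypothesis": "h2", "label": 2}],
    "ar_anli":  [{"premise": "p3", "hypothesis": "h3", "label": 1}],
    "ar_wanli": [{"premise": "p4", "hypothesis": "h4", "label": 0}],
    "ar_ling":  [{"premise": "p5", "hypothesis": "h5", "label": 1}],
}

combined = [row for name in splits for row in splits[name]]
print(len(combined))  # 5 toy rows; the real dataset is far larger
```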

#### Who are the source data producers?

The original English data was produced by the creators of the MultiNLI, ANLI, Fever-NLI, WANLI, and LingNLI datasets. The Arabic data was subsequently generated by Moritz Laurer using state-of-the-art machine translation models (including `opus-mt` and `m2m100_1.2B`).

## Bias, Risks, and Limitations

The primary limitation is that the dataset is **machine-translated**. This process can introduce grammatical errors, awkward phrasing, and translation artifacts that do not reflect natural Arabic. Furthermore, any biases present in the original English source datasets (e.g., cultural, political, or topical biases from news and web text) are likely transferred and potentially amplified through the translation process.

### Recommendations

Users should be aware of the machine-translated nature of the data and perform a thorough error analysis before using it in production systems. It is recommended to use this dataset for fine-tuning models that are already robust to some level of noise.

## Citation

If you use this dataset in your work, please cite the original paper for the `multilingual-NLI-26lang-2mil7` dataset:

**BibTeX:**
```bibtex
@article{laurer_less_2022,
  title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT}-{NLI}},
  url = {https://osf.io/74b8k},
  language = {en-us},
  urldate = {2022-07-28},
  journal = {Preprint},
  author = {Laurer, Moritz and van Atteveldt, Wouter and Casas, Andreu Salleras and Welbers, Kasper},
  month = jun,
  year = {2022},
  note = {Publisher: Open Science Framework},
}
```