---
license: mit
task_categories:
- text-classification
- table-question-answering
- sentence-similarity
language:
- ar
- en
pretty_name: Combined Arabic NLI Dataset
size_categories:
- 10K<n<100K
---

# Dataset Card for Combined Arabic NLI Dataset

This dataset is a comprehensive collection of Arabic sentence pairs for Natural Language Inference (NLI), created by combining all five Arabic splits from the `MoritzLaurer/multilingual-NLI-26lang-2mil7` dataset. It contains sentence pairs with entailment, neutral, and contradiction labels, making it a valuable general-purpose resource for Arabic NLP.

## Dataset Details

### Dataset Description

This dataset merges five distinct Arabic NLI sources into a single, unified collection, simplifying access for training and evaluation. The key value of this dataset is its diversity, as it pulls from various domains including `MNLI`, `Fever-NLI`, `ANLI`, `WANLI`, and `LingNLI`. It is suitable for training and evaluating Arabic models on the standard 3-way NLI task (entailment, neutral, contradiction) and serves as a strong base for developing text classifiers and zero-shot models in Arabic.

- **Curated by:** Salah Abdo
- **Language(s) (NLP):** Arabic (`ar`)

### Dataset Sources

- **Repository:** [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7)
- **Paper:** [Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI](https://osf.io/74b8k)

## Uses

### Direct Use

This dataset is intended for fine-tuning Arabic NLP models for 3-way NLI classification. It can also be used as a base for creating more specialized datasets. For example, users can filter this data, as sketched after the list below, to extract:
* **Positive pairs** (`label == 0`) for semantic search and sentence similarity tasks.
* **Contradictory pairs** (`label == 2`) for training models to detect factual inconsistencies.
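
A minimal filtering sketch with the `datasets` library is shown below; the repo ID is a placeholder for wherever this combined dataset is hosted, not a confirmed path.

```python
from datasets import load_dataset

# Placeholder repo ID; replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/combined-arabic-nli", split="train")

# Entailment pairs (label == 0) can serve as positives for sentence-similarity training.
positive_pairs = ds.filter(lambda ex: ex["label"] == 0)

# Contradiction pairs (label == 2) can serve as examples of factual inconsistency.
contradiction_pairs = ds.filter(lambda ex: ex["label"] == 2)

print(len(positive_pairs), len(contradiction_pairs))
```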

### Out-of-Scope Use

This dataset is not recommended for tasks that require perfect grammatical fluency or evaluation of subtle linguistic nuances, as the data is machine-translated and may contain noise, artifacts, or unnatural phrasing. It should not be used as a benchmark for human-level translation quality.

## Dataset Structure

The dataset consists of a single file containing sentence pairs and their corresponding labels; an illustrative record is sketched after the field list below.

**Data Fields:**
* `premise`: A string representing the premise sentence in Arabic.
* `hypothesis`: A string representing the hypothesis sentence in Arabic.
* `label`: An integer representing the relationship between the premise and hypothesis.
    * `0`: **Entailment** (the hypothesis logically follows from the premise)
    * `1`: **Neutral** (the hypothesis is neither entailed nor contradicted by the premise)
    * `2`: **Contradiction** (the hypothesis logically contradicts the premise)
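
For illustration only, a record follows this shape (the text fields below are placeholders, not real rows from the dataset):

```python
# Illustrative record structure; "..." stands in for Arabic text.
example = {
    "premise": "...",     # Arabic premise sentence
    "hypothesis": "...",  # Arabic hypothesis sentence
    "label": 0,           # 0 = entailment, 1 = neutral, 2 = contradiction
}

LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}
print(LABEL_NAMES[example["label"]])  # -> "entailment"
```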

## Dataset Creation

### Curation Rationale

The motivation for creating this dataset was to provide the research community with a single, large, and diverse Arabic NLI resource. By combining all available Arabic splits from the well-regarded `multilingual-NLI-26lang-2mil7` collection, this dataset simplifies the process of training and evaluating robust Arabic NLI models.

### Source Data

#### Data Collection and Processing

This dataset was prepared by loading all five Arabic splits (`ar_mnli`, `ar_fever`, `ar_anli`, `ar_wanli`, and `ar_ling`) from the `MoritzLaurer/multilingual-NLI-26lang-2mil7` dataset on the Hugging Face Hub. These individual splits were then concatenated into a single, unified dataset, preserving all original columns and labels without any filtering.
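
A minimal sketch of how this merge could be reproduced with the `datasets` library, assuming the Arabic subsets are exposed under the split names listed above (adjust if the Hub repo organizes them differently):

```python
from datasets import load_dataset, concatenate_datasets

# The five Arabic subsets named in this card.
ARABIC_SPLITS = ["ar_mnli", "ar_fever", "ar_anli", "ar_wanli", "ar_ling"]

parts = [
    load_dataset("MoritzLaurer/multilingual-NLI-26lang-2mil7", split=split)
    for split in ARABIC_SPLITS
]

# Concatenate into one unified dataset, keeping all original columns and labels.
combined = concatenate_datasets(parts)
print(combined)
```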

#### Who are the source data producers?

The original English data was produced by the creators of the MultiNLI, ANLI, Fever-NLI, WANLI, and LingNLI datasets. The Arabic data was subsequently generated by Moritz Laurer using state-of-the-art machine translation models (including `opus-mt` and `m2m100_1.2B`).

## Bias, Risks, and Limitations

The primary limitation is that the dataset is **machine-translated**. This process can introduce grammatical errors, awkward phrasing, and translation artifacts that do not reflect natural Arabic. Furthermore, any biases present in the original English source datasets (e.g., cultural, political, or topical biases from news and web text) are likely transferred and potentially amplified through the translation process.

### Recommendations

Users should be aware of the machine-translated nature of the data and perform a thorough error analysis if using it for production systems. It is recommended to use this dataset for fine-tuning models that are already robust to some level of noise.
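
As one possible starting point, the sketch below fine-tunes a pretrained Arabic encoder on the 3-way labels with `transformers`; the dataset repo ID is a placeholder, and the base checkpoint (`aubmindlab/bert-base-arabertv02`) is an example choice rather than a requirement of this dataset.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # example Arabic encoder

# Placeholder repo ID; replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/combined-arabic-nli", split="train")
ds = ds.train_test_split(test_size=0.05)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # NLI is modeled as pair classification: premise and hypothesis are encoded together.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=256)

encoded = ds.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

args = TrainingArguments(
    output_dir="arabic-nli-model",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,
)
trainer.train()
```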

## Citation

If you use this dataset in your work, please cite the original paper for the `multilingual-NLI-26lang-2mil7` dataset:

**BibTeX:**
```bibtex
@article{laurer_less_2022,
    title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
    url = {https://osf.io/74b8k},
    language = {en-us},
    urldate = {2022-07-28},
    journal = {Preprint},
    author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
    month = jun,
    year = {2022},
    note = {Publisher: Open Science Framework},
}
```