---
annotations_creators:
- expert-generated
language:
- sl
language_creators:
- found
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Slovene natural language inference dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- natural-language-inference
dataset_info:
- config_name: default
  features:
  - name: pair_id
    dtype: string
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: annotation1
    dtype: string
  - name: annotator1_id
    dtype: string
  - name: annotation2
    dtype: string
  - name: annotator2_id
    dtype: string
  - name: annotation3
    dtype: string
  - name: annotator3_id
    dtype: string
  - name: annotation_final
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 1352635
    num_examples: 4392
  - name: validation
    num_bytes: 164561
    num_examples: 547
  - name: test
    num_bytes: 246518
    num_examples: 998
  download_size: 410093
  dataset_size: 1763714
- config_name: public
  features:
  - name: pair_id
    dtype: string
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: annotation1
    dtype: string
  - name: annotator1_id
    dtype: string
  - name: annotation2
    dtype: string
  - name: annotator2_id
    dtype: string
  - name: annotation3
    dtype: string
  - name: annotator3_id
    dtype: string
  - name: annotation_final
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 1352591
    num_examples: 4392
  - name: validation
    num_bytes: 164517
    num_examples: 547
  - name: test
    num_bytes: 246474
    num_examples: 998
  download_size: 410093
  dataset_size: 1763582
- config_name: private
  features:
  - name: pair_id
    dtype: string
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: annotation1
    dtype: string
  - name: annotator1_id
    dtype: string
  - name: annotation2
    dtype: string
  - name: annotator2_id
    dtype: string
  - name: annotation3
    dtype: string
  - name: annotator3_id
    dtype: string
  - name: annotation_final
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
  - name: validation
  - name: test
  download_size: 0
  dataset_size: 0
---
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis), each manually labeled as "entailment", "contradiction", or "neutral". We created the dataset from sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked with modifying the hypothesis of a candidate pair so that the pair reflects a given label. The dataset is balanced, since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. It is split into train, validation, and test sets of 4,392, 547, and 998 pairs, respectively.
The test set contains only the premise and hypothesis (i.e., no annotations), since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting your test set predictions to SloBENCH to get an evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`, as in the sketch below.
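For convenience, here is a minimal loading sketch using the Hugging Face `datasets` library (the `data_dir` placeholder for the private configuration is left as above):
```python
from datasets import load_dataset

# Public configuration: labels are withheld in the test split.
sinli = load_dataset("cjvt/si_nli", "public")
print(sinli)              # DatasetDict with train/validation/test splits
print(sinli["train"][0])  # one labeled premise-hypothesis pair

# Private configuration: only if you have the labeled test set locally;
# point data_dir at the directory containing the private files.
# sinli_private = load_dataset("cjvt/si_nli", "private", data_dir="<...>")
```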
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
    'pair_id': 'P0',
    'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
    'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
    'annotation1': 'entailment',
    'annotator1_id': 'annotator_C',
    'annotation2': 'entailment',
    'annotator2_id': 'annotator_A',
    'annotation3': '',
    'annotator3_id': '',
    'annotation_final': 'entailment',
    'label': 'entailment'
}
```
### Data Fields
- `pair_id`: string identifier of the pair (`""` in the test set),
- `premise`: premise sentence,
- `hypothesis`: hypothesis sentence,
- `annotation1`: the first annotation (`""` if not available),
- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
- `annotation2`: the second annotation (`""` if not available),
- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
- `annotation3`: the third annotation (`""` if not available),
- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
- `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or a unanimous agreement could not be reached),
- `label`: aggregated annotation: either the same as `annotation_final` (in case of agreement), the same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the simplest possibility; users may instead apply a more advanced scheme based on the individual annotations (e.g., learning with disagreement), as in the sketch after this list.
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines; their quality was checked by the authors instead. Such examples have neither the individual annotations nor the annotator IDs.
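As a concrete illustration, here is a minimal sketch of one reading of the aggregation described above, plus one possible "learning with disagreement" alternative that keeps the full annotation distribution. The helper names and the fixed class order are our own assumptions, not part of the dataset:
```python
CLASSES = ("entailment", "neutral", "contradiction")  # assumed fixed order

def aggregate_label(example):
    """One reading of the scheme above: reuse `annotation_final` when the
    annotators agreed, fall back to `annotation1` on disagreement."""
    if example["annotation_final"]:
        return example["annotation_final"]
    if example["annotation1"]:
        return example["annotation1"]
    return ""  # unannotated (test set) examples

def soft_label(example):
    """One 'learning with disagreement' option: the empirical distribution
    of the individual annotations instead of a single hard label."""
    annotations = [example[f"annotation{i}"] for i in (1, 2, 3)]
    annotations = [a for a in annotations if a]  # drop missing ("") slots
    if not annotations:
        return None
    return [annotations.count(c) / len(annotations) for c in CLASSES]
```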
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. |