---
dataset_info:
  features:
  - name: query
    dtype: string
  - name: positive_passages
    sequence: string
  - name: negative_passages
    sequence: string
  splits:
  - name: train
    num_bytes: 361146987
    num_examples: 398398
  - name: dev
    num_bytes: 14493923
    num_examples: 4030
  - name: test
    num_bytes: 10891808
    num_examples: 6795
  download_size: 153841910
  dataset_size: 386532718
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
---
# Dataset Details
This dataset is processed from three sources of Thai data (castorini/mr-tydi-corpus supplies the passage texts for castorini/mr-tydi):
- miracl/miracl
- facebook/xnli
- castorini/mr-tydi
- castorini/mr-tydi-corpus
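
The processed splits can be loaded directly with 🤗 Datasets; the repository id below is a placeholder for this dataset's actual id:
```python
from datasets import load_dataset

# '<this-repo-id>' is a placeholder; substitute this dataset's actual repository id.
ds = load_dataset('<this-repo-id>')
print(ds)  # train / dev / test splits with query, positive_passages, negative_passages
```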
## Processing scripts
Here are the processing scripts I used for each source.
### miracl/miracl
```python
from datasets import Dataset
from tqdm import tqdm


def create_miracl_datasets(datasets):
  """
  Extract the passage texts from MIRACL examples into
  ['query', 'positive_passages', 'negative_passages'] lists.
  """
  datasets_ = {
      'query': [],
      'positive_passages': [],
      'negative_passages': [],
  }
  for data in tqdm(datasets):
    datasets_['query'].append(data['query'])
    # keep only the passage text, dropping docid/title metadata
    negative_passages = []
    for negative_passage in data['negative_passages']:
      negative_passages.append(negative_passage['text'])
    datasets_['negative_passages'].append(negative_passages)
    positive_passages = []
    for positive_passage in data['positive_passages']:
      positive_passages.append(positive_passage['text'])
    datasets_['positive_passages'].append(positive_passages)
  return Dataset.from_dict(datasets_)
```
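A minimal usage sketch for this function, assuming the Thai subset of miracl/miracl is the 'th' config (the exact train/eval/test assignment shown below is not part of the script):
```python
from datasets import load_dataset

# Assumption: the Thai subset of miracl/miracl is the 'th' config.
# Depending on your datasets version, trust_remote_code=True may be required.
miracl = load_dataset('miracl/miracl', 'th')

miracl_train = create_miracl_datasets(miracl['train'])
miracl_dev = create_miracl_datasets(miracl['dev'])
```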
Split sizes:
```python
DatasetDict({
    train: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 2972
    })
    eval: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 366
    })
    test: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 367
    })
})
```

### facebook/xnli
```python
def create_xnli_datasets(datasets):
  """
  Transform ['premise', 'hypothesis', 'label'] into
  ['query', 'positive_passages', 'negative_passages']:
  the premise is used as the query, the hypothesis as the passage,
  contradiction pairs become negative passages, and
  neutral / entailment pairs become positive passages.
  """
  datasets_ = {
      'query': [],
      'positive_passages': [],
      'negative_passages': []
  }
  for data in tqdm(datasets):
    datasets_['query'].append(data['premise'])
    # labels are assumed to already be decoded to the string names
    # 'entailment' / 'neutral' / 'contradiction'
    if data['label'] == 'contradiction':
      datasets_['positive_passages'].append([])
      datasets_['negative_passages'].append([data['hypothesis']])
    elif data['label'] in ('neutral', 'entailment'):
      datasets_['positive_passages'].append([data['hypothesis']])
      datasets_['negative_passages'].append([])
  return Dataset.from_dict(datasets_)
```
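A minimal usage sketch, assuming the Thai subset of facebook/xnli is the 'th' config. Since that dataset stores labels as ClassLabel integers, they are decoded to strings before calling the function above:
```python
from datasets import load_dataset

# Assumption: the Thai subset of facebook/xnli is the 'th' config.
xnli = load_dataset('facebook/xnli', 'th')

# facebook/xnli stores 'label' as a ClassLabel integer
# (0 = entailment, 1 = neutral, 2 = contradiction);
# decode it to the string names that create_xnli_datasets compares against.
label_names = xnli['train'].features['label'].names
xnli = (
    xnli.map(lambda x: {'label_name': label_names[x['label']]})
        .remove_columns('label')
        .rename_column('label_name', 'label')
)

xnli_train = create_xnli_datasets(xnli['train'])
```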
Split sizes:
```python
DatasetDict({
    train: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 392702
    })
    eval: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 2490
    })
    test: Dataset({
        features: ['query', 'positive_passages', 'negative_passages'],
        num_rows: 5010
    })
})
```

### castorini/mr-tydi
```python
def create_tydi_datasets(datasets, corpus, train=False):
  """
  The positive passages carry only docids, so their texts are looked up
  in the mr-tydi-corpus; negative passages are kept only for the train split.
  """
  cor_df = corpus.to_pandas()
  datasets_ = {
      'query': [],
      'positive_passages': [],
      'negative_passages': [],
  }
  for data in tqdm(datasets):
    datasets_['query'].append(data['query'])
    if train:
      negative_passages = []
      for negative_passage in data['negative_passages']:
        negative_passages.append(negative_passage['text'])
      datasets_['negative_passages'].append(negative_passages)
    else:
      datasets_['negative_passages'].append([])
    positive_passages = []
    for positive_passage in data['positive_passages']:
      search_value = positive_passage['docid']
      # resolve the docid against the corpus; skip docids with no match
      matches = cor_df.loc[cor_df['docid'] == search_value, 'text'].values
      if len(matches) == 0:
        continue
      positive_passages.append(matches[0])
    datasets_['positive_passages'].append(positive_passages)
  return Dataset.from_dict(datasets_)
```
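
A minimal usage sketch, assuming the Thai subsets of castorini/mr-tydi and castorini/mr-tydi-corpus are both the 'thai' config and the corpus is exposed as a single 'train' split:
```python
from datasets import load_dataset

# Assumptions: 'thai' configs, corpus exposed as a single 'train' split.
mr_tydi = load_dataset('castorini/mr-tydi', 'thai')
corpus = load_dataset('castorini/mr-tydi-corpus', 'thai', split='train')

tydi_train = create_tydi_datasets(mr_tydi['train'], corpus, train=True)
tydi_dev = create_tydi_datasets(mr_tydi['dev'], corpus)
tydi_test = create_tydi_datasets(mr_tydi['test'], corpus)
```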

Split sizes:
```python
DatasetDict({
    train: Dataset({
        features: ['query_id', 'query', 'positive_passages', 'negative_passages'],
        num_rows: 3319
    })
    dev: Dataset({
        features: ['query_id', 'query', 'positive_passages', 'negative_passages'],
        num_rows: 807
    })
    test: Dataset({
        features: ['query_id', 'query', 'positive_passages', 'negative_passages'],
        num_rows: 1190
    })
})
```
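
Finally, a rough sketch of how the per-source outputs could be combined into the final train/dev/test splits. The variables below stand for DatasetDicts built from the three sources with the functions above; the exact split assignment used for this repository is not shown in the scripts.
```python
from datasets import DatasetDict, concatenate_datasets

# miracl_ds, xnli_ds, tydi_ds are assumed to be DatasetDicts with matching
# 'train' / 'dev' / 'test' splits produced by the conversion functions above.
combined = DatasetDict({
    split: concatenate_datasets([miracl_ds[split], xnli_ds[split], tydi_ds[split]])
    for split in ('train', 'dev', 'test')
})
```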