parquet-converter committed on
Commit 1d2a241 · 1 Parent(s): 3ea8b7e

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
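Every pattern in the deleted .gitattributes routed matching files through Git LFS. As a rough illustration (not part of this repo), gitattributes matching is gitignore-style, and for simple globs like these Python's `fnmatch` is a close approximation; `tracked_by_lfs` is a hypothetical helper name:

```python
from fnmatch import fnmatch

# A few of the patterns from the deleted .gitattributes. Note: real
# gitattributes matching has extra rules (e.g. "**"); fnmatch only
# approximates the simple globs shown here.
LFS_PATTERNS = ["*.parquet", "*.bin", "*.tar.*", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Hypothetical helper: does any LFS pattern match this filename?"""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("default/wiki_qa-train.parquet"))  # True
print(tracked_by_lfs("README.md"))                      # False
```

This is why the parquet files added later in this commit appear as LFS pointer stubs rather than raw data.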
README.md DELETED
@@ -1,318 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- pretty_name: WikiQA
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - open-domain-qa
- paperswithcode_id: wikiqa
- dataset_info:
-   features:
-   - name: question_id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: document_title
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: '0'
-           1: '1'
-   splits:
-   - name: test
-     num_bytes: 1337903
-     num_examples: 6165
-   - name: train
-     num_bytes: 4469148
-     num_examples: 20360
-   - name: validation
-     num_bytes: 591833
-     num_examples: 2733
-   download_size: 7094233
-   dataset_size: 6398884
- ---
-
- # Dataset Card for "wiki_qa"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://www.microsoft.com/en-us/download/details.aspx?id=52419](https://www.microsoft.com/en-us/download/details.aspx?id=52419)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [WikiQA: A Challenge Dataset for Open-Domain Question Answering](https://aclanthology.org/D15-1237/)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 6.77 MB
- - **Size of the generated dataset:** 6.10 MB
- - **Total amount of disk used:** 12.87 MB
-
- ### Dataset Summary
-
- Wiki Question Answering corpus from Microsoft.
-
- The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 6.77 MB
- - **Size of the generated dataset:** 6.10 MB
- - **Total amount of disk used:** 12.87 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "answer": "Glacier caves are often called ice caves , but this term is properly used to describe bedrock caves that contain year-round ice.",
-     "document_title": "Glacier cave",
-     "label": 0,
-     "question": "how are glacier caves formed?",
-     "question_id": "Q1"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `question_id`: a `string` feature.
- - `question`: a `string` feature.
- - `document_title`: a `string` feature.
- - `answer`: a `string` feature.
- - `label`: a classification label, with possible values including `0` (0), `1` (1).
-
- ### Data Splits
-
- | name  |train|validation|test|
- |-------|----:|---------:|---:|
- |default|20360|      2733|6165|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- MICROSOFT RESEARCH DATA LICENSE AGREEMENT
- FOR
- MICROSOFT RESEARCH WIKIQA CORPUS
-
- These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
- affiliates) and you. Please read them. They apply to the data associated with this license above, which includes
- the media on which you received it, if any. The terms also apply to any Microsoft:
- - updates,
- - supplements,
- - Internet-based services, and
- - support services
- for this data, unless other terms accompany those items. If so, those terms apply.
- BY USING THE DATA, YOU ACCEPT THESE TERMS. IF YOU DO NOT ACCEPT THEM, DO NOT USE THE DATA.
- If you comply with these license terms, you have the rights below.
-
- 1. SCOPE OF LICENSE.
-    a. You may use, copy, modify, create derivative works, and distribute the Dataset:
-       i. for research and technology development purposes only. Examples of research and technology
-          development uses are teaching, academic research, public demonstrations and experimentation;
-          and
-       ii. to publish (or present papers or articles) on your results from using such Dataset.
-    b. The data is licensed, not sold. This agreement only gives you some rights to use the data. Microsoft reserves
-       all other rights. Unless applicable law gives you more rights despite this limitation, you may use the data only
-       as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the
-       data that only allow you to use it in certain ways.
-       You may not
-       - work around any technical limitations in the data;
-       - reverse engineer, decompile or disassemble the data, except and only to the extent that applicable law
-         expressly permits, despite this limitation;
-       - rent, lease or lend the data;
-       - transfer the data or this agreement to any third party; or
-       - use the data directly in a commercial product without Microsoft’s permission.
-
- 2. DISTRIBUTION REQUIREMENTS:
-    a. If you distribute the Dataset or any derivative works of the Dataset, you will distribute them under the
-       same terms and conditions as in this Agreement, and you will not grant other rights to the Dataset or
-       derivative works that are different from those provided by this Agreement.
-    b. If you have created derivative works of the Dataset, and distribute such derivative works, you will
-       cause the modified files to carry prominent notices so that recipients know that they are not receiving
-       the original Dataset. Such notices must state: (i) that you have changed the Dataset; and (ii) the date
-       of any changes.
-
- 3. DISTRIBUTION RESTRICTIONS. You may not: (a) alter any copyright, trademark or patent notice in the
-    Dataset; (b) use Microsoft’s trademarks in a way that suggests your derivative works or modifications come from
-    or are endorsed by Microsoft; (c) include the Dataset in malicious, deceptive or unlawful programs.
-
- 4. OWNERSHIP. Microsoft retains all right, title, and interest in and to any Dataset provided to you under this
-    Agreement. You acquire no interest in the Dataset you may receive under the terms of this Agreement.
-
- 5. LICENSE TO MICROSOFT. Microsoft is granted back, without any restrictions or limitations, a non-exclusive,
-    perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or
-    display, use, modify, post, distribute, make and have made, sell and transfer your modifications to and/or
-    derivative works of the Dataset, for any purpose.
-
- 6. FEEDBACK. If you give feedback about the Dataset to Microsoft, you give to Microsoft, without charge, the right
-    to use, share and commercialize your feedback in any way and for any purpose. You also give to third parties,
-    without charge, any patent rights needed for their products, technologies and services to use or interface with
-    any specific parts of a Microsoft dataset or service that includes the feedback. You will not give feedback that is
-    subject to a license that requires Microsoft to license its Dataset or documentation to third parties because we
-    include your feedback in them. These rights survive this Agreement.
-
- 7. EXPORT RESTRICTIONS. The Dataset is subject to United States export laws and regulations. You must
-    comply with all domestic and international export laws and regulations that apply to the Dataset. These laws
-    include restrictions on destinations, end users and end use. For additional information, see
-    www.microsoft.com/exporting.
-
- 8. ENTIRE AGREEMENT. This Agreement, and the terms for supplements, updates, Internet-based services and
-    support services that you use, are the entire agreement for the Dataset.
-
- 9. SUPPORT SERVICES. Because this data is “as is,” we may not provide support services for it.
-
- 10. APPLICABLE LAW.
-     a. United States. If you acquired the software in the United States, Washington state law governs the
-        interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles.
-        The laws of the state where you live govern all other claims, including claims under state consumer protection
-        laws, unfair competition laws, and in tort.
-     b. Outside the United States. If you acquired the software in any other country, the laws of that country
-        apply.
-
- 11. LEGAL EFFECT. This Agreement describes certain legal rights. You may have other rights under the laws of your
-     country. You may also have rights with respect to the party from whom you acquired the Dataset. This
-     Agreement does not change your rights under the laws of your country if the laws of your country do not permit
-     it to do so.
-
- 12. DISCLAIMER OF WARRANTY. The Dataset is licensed “as-is.” You bear the risk of using it. Microsoft gives no
-     express warranties, guarantees or conditions. You may have additional consumer rights or statutory guarantees
-     under your local laws which this agreement cannot change. To the extent permitted under your local laws,
-     Microsoft excludes the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
-
- 13. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
-     MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY
-     OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL
-     DAMAGES.
-
- This limitation applies to
- - anything related to the software, services, content (including code) on third party Internet sites, or third party
-   programs; and
- - claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other
-   tort to the extent permitted by applicable law.
-
- It also applies even if Microsoft knew or should have known about the possibility of the damages. The above
- limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of
- incidental, consequential or other damages.
-
- ### Citation Information
-
- ```
- @inproceedings{yang-etal-2015-wikiqa,
-     title = "{W}iki{QA}: A Challenge Dataset for Open-Domain Question Answering",
-     author = "Yang, Yi and
-       Yih, Wen-tau and
-       Meek, Christopher",
-     booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
-     month = sep,
-     year = "2015",
-     address = "Lisbon, Portugal",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/D15-1237",
-     doi = "10.18653/v1/D15-1237",
-     pages = "2013--2018",
- }
- ```
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Wiki Question Answering corpus from Microsoft\n", "citation": "@InProceedings{YangYihMeek:EMNLP2015:WikiQA,\n author = {{Yi}, Yang and {Wen-tau}, Yih and {Christopher} Meek},\n title = \"{WikiQA: A Challenge Dataset for Open-Domain Question Answering}\",\n journal = {Association for Computational Linguistics},\n year = 2015,\n doi = {10.18653/v1/D15-1237},\n pages = {2013\u20132018},\n}\n", "homepage": "https://www.microsoft.com/en-us/download/details.aspx?id=52419", "license": "", "features": {"question_id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "document_title": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "supervised_keys": null, "builder_name": "wiki_qa", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1337903, "num_examples": 6165, "dataset_name": "wiki_qa"}, "train": {"name": "train", "num_bytes": 4469148, "num_examples": 20360, "dataset_name": "wiki_qa"}, "validation": {"name": "validation", "num_bytes": 591833, "num_examples": 2733, "dataset_name": "wiki_qa"}}, "download_checksums": {"https://download.microsoft.com/download/E/5/f/E5FCFCEE-7005-4814-853D-DAA7C66507E0/WikiQACorpus.zip": {"num_bytes": 7094233, "checksum": "467c13f9e104552c0a9c16f41836ca8d89f9c0cc4b6e4355e104d5c3109ffa45"}}, "download_size": 7094233, "dataset_size": 6398884, "size_in_bytes": 13493117}}
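As a quick sanity check on the metadata in the deleted JSON above, the per-split `num_bytes` values sum exactly to the reported `dataset_size` (all numbers copied from the file):

```python
# Split sizes from the deleted dataset_infos.json.
split_num_bytes = {"test": 1337903, "train": 4469148, "validation": 591833}
dataset_size = 6398884

total = sum(split_num_bytes.values())
print(total == dataset_size)  # True
```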
 
 
default/wiki_qa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:047642865627a6c2bad6a712c67f341c15019f11a2093479a6fd6c5260c77e4d
+ size 593690
default/wiki_qa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5f0ac9baac4c4ae3349eb1fdf6ec3ccc3ba18f88519da61f900993613d0f1dd
+ size 2004000
default/wiki_qa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da5c338de1c24c96ed0c94997de698c09348b8c6f4bf053ad8f79391e3901498
+ size 263515
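The three parquet files ADDED above are committed as Git LFS pointer stubs, not the parquet bytes themselves. The pointer format is just a few `key value` lines; a minimal parser (the function name is ours, not part of any tool):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed for default/wiki_qa-validation.parquet.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:da5c338de1c24c96ed0c94997de698c09348b8c6f4bf053ad8f79391e3901498\n"
    "size 263515\n"
)
info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # 263515
```

The `oid` is the SHA-256 of the real file content, which LFS fetches from the remote object store on checkout.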
wiki_qa.py DELETED
@@ -1,96 +0,0 @@
- """TODO(wiki_qa): Add a description here."""
-
-
- import csv
- import os
-
- import datasets
-
-
- # TODO(wiki_qa): BibTeX citation
- _CITATION = """\
- @InProceedings{YangYihMeek:EMNLP2015:WikiQA,
-   author = {{Yi}, Yang and {Wen-tau}, Yih and {Christopher} Meek},
-   title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}",
-   journal = {Association for Computational Linguistics},
-   year = 2015,
-   doi = {10.18653/v1/D15-1237},
-   pages = {2013–2018},
- }
- """
-
- # TODO(wiki_qa):
- _DESCRIPTION = """\
- Wiki Question Answering corpus from Microsoft
- """
-
- _DATA_URL = "https://download.microsoft.com/download/E/5/f/E5FCFCEE-7005-4814-853D-DAA7C66507E0/WikiQACorpus.zip"  # 'https://www.microsoft.com/en-us/download/confirmation.aspx?id=52419'
-
-
- class WikiQa(datasets.GeneratorBasedBuilder):
-     """TODO(wiki_qa): Short description of my dataset."""
-
-     # TODO(wiki_qa): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(wiki_qa): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "question_id": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "document_title": datasets.Value("string"),
-                     "answer": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(num_classes=2),
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://www.microsoft.com/en-us/download/details.aspx?id=52419",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(wiki_qa): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_DATA_URL)
-         dl_dir = os.path.join(dl_dir, "WikiQACorpus")
-         # dl_dir = os.path.join(dl_dir, '')
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST, gen_kwargs={"filepath": os.path.join(dl_dir, "WikiQA-test.tsv")}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION, gen_kwargs={"filepath": os.path.join(dl_dir, "WikiQA-dev.tsv")}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(dl_dir, "WikiQA-train.tsv")},
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(wiki_qa): Yields (key, example) tuples from the dataset
-
-         with open(filepath, encoding="utf-8") as f:
-             reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
-             for idx, row in enumerate(reader):
-                 yield idx, {
-                     "question_id": row["QuestionID"],
-                     "question": row["Question"],
-                     "document_title": row["DocumentTitle"],
-                     "answer": row["Sentence"],
-                     "label": row["Label"],
-                 }
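The deleted `_generate_examples` above is the core of the old loading script: a tab-separated read with quoting disabled, with the TSV columns remapped to the dataset's field names. A self-contained sketch of the same parsing on an in-memory sample (the data row is the example record from the deleted README; the `int(...)` cast stands in for the `ClassLabel` encoding the builder applied):

```python
import csv
import io

# One header line plus one data row in the WikiQACorpus TSV layout.
sample_tsv = (
    "QuestionID\tQuestion\tDocumentTitle\tSentence\tLabel\n"
    "Q1\thow are glacier caves formed?\tGlacier cave\t"
    "Glacier caves are often called ice caves , but this term is properly "
    "used to describe bedrock caves that contain year-round ice.\t0\n"
)

# Same reader configuration as the deleted script: tab-delimited,
# quoting disabled so embedded quote characters pass through verbatim.
reader = csv.DictReader(io.StringIO(sample_tsv), delimiter="\t", quoting=csv.QUOTE_NONE)
examples = [
    {
        "question_id": row["QuestionID"],
        "question": row["Question"],
        "document_title": row["DocumentTitle"],
        "answer": row["Sentence"],
        "label": int(row["Label"]),  # stands in for the ClassLabel cast
    }
    for row in reader
]
print(examples[0]["document_title"])  # Glacier cave
```

After this commit, that script is gone: consumers read the parquet files directly instead of re-parsing the TSVs.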