---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
An adaptation of the Natural Questions dataset by Google (available here).

We kept only questions with short answers and only non-HTML tokens, with additional cleaning of the text to remove irrelevant Wikipedia-specific artifacts such as `Jump to: navigation, search`, `( edit )`, etc.
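As an illustration, here is a rough sketch of the kind of regex-based cleanup described above; this is not the exact procedure used to build the dataset, only an example of removing such markers:

```python
import re

# Illustrative patterns for the Wikipedia markers mentioned above;
# the actual cleaning applied to the dataset may differ.
WIKI_NOISE = [
    r"Jump to\s*:\s*navigation\s*,\s*search",
    r"\(\s*edit\s*\)",
]

def strip_wiki_noise(text: str) -> str:
    """Remove Wikipedia navigation/edit markers and collapse extra whitespace."""
    for pattern in WIKI_NOISE:
        text = re.sub(pattern, "", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(strip_wiki_noise("Jump to : navigation , search France ( edit ) is a country."))
# -> "France is a country."
```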
The intended use is to test your RAG pipeline on natural, open-ended question-answering tasks with expected short answers. In this setting, evaluation metrics are mostly well defined; for example, one can use the evaluation metrics of the SQuAD 2.0 dataset.
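For instance, here is a minimal sketch of scoring predictions with the `squad_v2` metric from the Hugging Face `evaluate` library (the ids and texts below are made up for illustration):

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

# Toy prediction/reference pair in the format expected by the metric.
predictions = [
    {"id": "q1", "prediction_text": "Paris", "no_answer_probability": 0.0},
]
references = [
    {"id": "q1", "answers": {"text": ["Paris"], "answer_start": [0]}},
]

results = squad_v2.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # exact-match and token-level F1 scores
```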
Please note that the contents of the `answers` key are lists, for consistency between the train set (which has a single answer per question) and the validation set (for which multiple answers may be available).
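A minimal sketch of loading the dataset and reading the `answers` field (the repository id below is a placeholder; substitute the actual id of this dataset on the Hub):

```python
from datasets import load_dataset

ds = load_dataset("username/dataset-name")  # hypothetical repo id

# Train set: the `answers` list holds a single answer per question.
print(ds["train"][0]["answers"])

# Validation set: the `answers` list may hold several valid answers.
print(ds["validation"][0]["answers"])
```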