Update README.md
README.md
---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---

Adaptation of the Natural Questions dataset by Google (available [here](google-research-datasets/natural_questions)).
We kept only questions with short answers, and only non-HTML tokens, with additional filtering of recurring Wikipedia boilerplate.
For example, `Jump to : navigation, search`, `( edit )`, etc.
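
To take a quick look at the data, it can be loaded with the Hugging Face `datasets` library. A minimal sketch, where the repository ID is a placeholder and the `question` field name is an assumption (only the `answers` key is documented on this card):

```python
from datasets import load_dataset

# Placeholder: replace with this dataset's actual repository ID on the Hub.
REPO_ID = "<namespace>/<dataset-name>"

dataset = load_dataset(REPO_ID)

# `answers` is documented below as a list of strings; `question` is an assumed field name.
example = dataset["train"][0]
print(example["question"])
print(example["answers"])
```
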
The intended use is to test your RAG pipeline on natural, open-ended question-answering tasks, with expected short answers.
In this case, evaluation metrics are *mostly* well defined. For example, one can use the evaluation metrics of the [SQuAD 2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/).
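
As an illustration (a sketch in the spirit of the SQuAD 2.0 evaluation script, not an official scorer for this dataset), exact match and token-level F1 can be computed with SQuAD-style normalization, keeping the best score over the available reference answers:

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    if not pred_tokens or not gold_tokens:
        # If either side is empty, F1 is 1 only when both are empty.
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_over_answers(metric, prediction: str, answers: list[str]) -> float:
    """Score the prediction against every reference answer and keep the best."""
    return max(metric(prediction, gold) for gold in answers)
```
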
Please note that the contents of the `answers` key are lists, for consistency between the train set (which has a single answer per question) and the validation set (for which multiple answers may be available).
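
For instance, reusing the helpers sketched above (`REPO_ID`, `exact_match`, `f1_score`, `max_over_answers`) together with a hypothetical `rag_pipeline` callable that returns a string answer, validation-set scores could be aggregated like this:

```python
from datasets import load_dataset

# Continues the sketches above: REPO_ID, exact_match, f1_score and max_over_answers
# are defined there; `rag_pipeline` is a hypothetical function returning a string.
validation = load_dataset(REPO_ID, split="validation")

em_scores, f1_scores = [], []
for example in validation:
    prediction = rag_pipeline(example["question"])
    # `answers` is always a list: a single reference in train, possibly several in validation.
    em_scores.append(max_over_answers(exact_match, prediction, example["answers"]))
    f1_scores.append(max_over_answers(f1_score, prediction, example["answers"]))

print(f"Exact match: {100 * sum(em_scores) / len(em_scores):.2f}")
print(f"F1: {100 * sum(f1_scores) / len(f1_scores):.2f}")
```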