Improve dataset card: Add task categories, tags, and sample usage
#1
by nielsr (HF Staff) - opened
README.md CHANGED
@@ -1,34 +1,55 @@
---
license: apache-2.0
task_categories:
- question-answering
tags:
- long-context
- retrieval
- llm-evaluation
- benchmark
---

# Difficult Long-context Retrieval Tasks
* 📄 This is the dataset used in the paper ["Hyper-multi-step: The Truth Behind Difficult Long-context Tasks"](https://arxiv.org/abs/2410.04422)
* 💻 [GitHub Repository](https://github.com/yuyijiong/hard_retrieval_for_llm)

This dataset is designed to evaluate the performance of Long-Context Language Models (LCLMs) on challenging retrieval tasks. While LCLMs are characterized by their extensive context windows, many long-context benchmarks contain tasks that even the most advanced models struggle to complete. Our research indicates that the difficulty primarily stems from two basic problems: "multi-matching retrieval," which requires retrieving multiple items simultaneously, and "logic-based retrieval," which requires logical judgment within the retrieval criterion. Both problems, while seemingly straightforward, are in fact hyper-multi-step in nature, which explains why LCLMs struggle with more advanced long-context tasks.

The tasks we provide are:

🟢 Simple tasks which are easy for Long-Context LMs:
* ``simple_k2v``: direct key-to-value retrieval. The key is given and the model needs to retrieve the corresponding value.
* ``simple_v2k``: direct value-to-key retrieval. The value is given and the model needs to retrieve the corresponding key.
* ``multi_step(kv)``: multi-step (formal) KV retrieval. The model needs to retrieve multiple values with multiple queries, concatenate the retrieved values to form a new key, and finally retrieve the value corresponding to that new key.

🔵 Difficult tasks which are nearly unsolvable for Long-Context LMs (a synthetic sketch of the KV format follows this list):
* ``logic(kv)``: logic-based KV retrieval. All values are integers in the range 0-9. A value range is given and the model needs to retrieve the key whose value lies in that range.
* ``logic(resume)``: logic-based student resume retrieval. A GPA range is given and the model needs to retrieve the student whose GPA is in that range.
* ``multi_match(kv)``: multi-match KV retrieval. The value is given and the model needs to retrieve all of the corresponding keys.
* ``multi_match(resume)``: multi-match student resume retrieval. A university name is given and the model needs to retrieve all of the students who are from this university.
* ``multi_match_last(kv)``: multi-match KV retrieval, except that all gold keys but the last one are already given in the prompt; the model only needs to retrieve the last one.

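To make the KV retrieval format concrete, here is a minimal, illustrative sketch of building a multi-match KV context. The real contexts come from this dataset's ``prompt`` column; the UUID-style keys and values and the exact question wording below are our assumptions, not the dataset's actual format.

```python
# Illustrative only: real evaluation should use the dataset's "prompt" column.
import json
import random
import uuid

num_items, num_matches = 100, 3
kv = {str(uuid.uuid4()): str(uuid.uuid4()) for _ in range(num_items)}

# Multi-match: plant one shared value under several different keys.
shared_value = str(uuid.uuid4())
gold_keys = random.sample(sorted(kv), num_matches)
for k in gold_keys:
    kv[k] = shared_value

prompt = (
    f"JSON data:\n{json.dumps(kv)}\n\n"
    f'Question: which keys have the value "{shared_value}"? '
    f"There are {num_matches} such keys."
)
print(prompt[:120], "...")
print("gold_keys:", gold_keys)
```

The model must scan the whole context, because the matching keys can appear anywhere; this is exactly what makes multi-matching hyper-multi-step for LCLMs.
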
## The meaning of file names
For example (see the parsing sketch below):
* ``logic_kv_10`` means the logic-based KV retrieval task, with the context containing 10 KV items.
* ``3_match_resume_100`` means the multi-match student resume retrieval task, with the context containing 100 students, where the model needs to retrieve 3 students.
* ``concat_3_kv_100_cot`` means the multi-step KV retrieval task, with the context containing 100 KV items, where the model needs to concatenate 3 values retrieved with 3 queries; the prompt style is Chain-of-Thought (CoT).

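If you work with many task files, the naming scheme is regular enough to parse programmatically. A small hypothetical helper (not part of the repository; the patterns below are inferred from the three examples above and may need adjusting for other task variants):

```python
import re

# Patterns inferred from the documented examples; hypothetical, not official.
PATTERNS = {
    "logic-based retrieval":   re.compile(r"^logic_(kv|resume)_(\d+)$"),
    "multi-match retrieval":   re.compile(r"^(\d+)_match_(kv|resume)_(\d+)$"),
    "multi-step KV retrieval": re.compile(r"^concat_(\d+)_kv_(\d+)(_cot)?$"),
}

def describe(name: str) -> str:
    for task, pattern in PATTERNS.items():
        match = pattern.match(name)
        if match:
            return f"{name} -> {task}, fields={match.groups()}"
    return f"{name} -> unknown naming scheme"

for n in ("logic_kv_10", "3_match_resume_100", "concat_3_kv_100_cot"):
    print(describe(n))
```
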
## Columns in the dataset
* ``prompt``: the full prompt of the task.
* ``gold_keys``: the gold keys of the KV retrieval task: a string if there is only one gold key, otherwise a list of strings. In student resume retrieval, it is the student name (or a list of student names).
* ``gold_values``: the gold values of the KV retrieval task: a string if there is only one gold value, otherwise a list of strings. In student resume retrieval, it is the student's GPA or university (or a list of them).

Note that, in the logic-based retrieval and multi-match retrieval tasks, ``gold_keys`` is actually the answer to the prompt.

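Because ``gold_keys`` and ``gold_values`` can each be a single string or a list of strings, it helps to normalize them before scoring. A minimal sketch (the helper name is ours, not part of the dataset):

```python
def as_list(x):
    """Normalize a str-or-list-of-str column to a list of strings."""
    return [x] if isinstance(x, str) else list(x)

row = {"gold_keys": "key_123", "gold_values": ["3.61", "3.72"]}
print(as_list(row["gold_keys"]))    # ['key_123']
print(as_list(row["gold_values"]))  # ['3.61', '3.72']
```
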
## Sample Usage

You can use the `evaluate.py` script from the [GitHub repository](https://github.com/yuyijiong/hard_retrieval_for_llm) to test the performance of LLMs on these difficult retrieval tasks, or on other retrieval tasks. To choose different tasks, models, and prompt types, modify the code in `evaluate.py` directly.

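If you only want to inspect the data, the task files can also be loaded directly. A minimal sketch, assuming the files are JSON and loadable with the `datasets` library; the file name below is a placeholder, so substitute an actual file from this repository:

```python
from datasets import load_dataset

# "logic_kv_10.json" is a placeholder file name; replace it with a real task
# file from this repository (and adjust the loader if the format differs).
ds = load_dataset("json", data_files="logic_kv_10.json", split="train")

row = ds[0]
print(row["prompt"][:300])  # the full task prompt (truncated for display)
print(row["gold_keys"])     # the answer for logic-based / multi-match tasks
```
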
The prompt styles provided are (a toy evaluation sketch follows this list):
* `None`: the default prompt, which lets the model give the answer directly.
* `"cot"`: adds a Chain-of-Thought (CoT) prompt, guiding the model to 'think step by step'.
* `"one-by-one"`: adds a one-by-one prompt, guiding the model to 'examine every item one by one'.
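
As a rough illustration of how these styles plug into an evaluation loop, here is a toy sketch. `query_model` is a stand-in for whatever LLM client you use, and the suffixes only approximate the actual prompts; `evaluate.py` in the repository is the authoritative implementation:

```python
# Approximate prompt-style suffixes; see evaluate.py for the real ones.
STYLE_SUFFIX = {
    None: "",
    "cot": "\nLet's think step by step.",
    "one-by-one": "\nExamine every item one by one.",
}

def query_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM client here

def evaluate(rows, style=None):
    correct = 0
    for row in rows:
        answer = query_model(row["prompt"] + STYLE_SUFFIX[style])
        gold = row["gold_keys"]
        gold = [gold] if isinstance(gold, str) else list(gold)
        correct += all(k in answer for k in gold)  # every gold key must appear
    return correct / len(rows)
```
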

For more detailed usage instructions, including hidden-states linear probing and attention analysis, please refer to the [GitHub repository](https://github.com/yuyijiong/hard_retrieval_for_llm).