## TL;DR

We identify a task that is **super easy for humans** but where all LLMs, from early 0.1B models to the most modern 600B+ ones (GPT-5, Grok-4, Gemini, DeepSeek, etc.), consistently **fail in the same way**. This pinpoints the **core challenge of MRCR**.

Multi-round co-reference in Context Interference:
If MRCR is "multiple needles in a haystack", we show the **haystack isn't necessary**.

- Our demo site: https://sites.google.com/view/cog4llm
- Our paper (ICML2025 Long-Context Workshop): https://arxiv.org/abs/2506.08184
- Mechanistic research is ongoing. The test is well-established in cognitive science, where it has been studied extensively to measure human **Working Memory capacity**.

Two sets of tests are provided: one with the number of updates fixed at 20, and another with a fixed number of updates per key.

(This test is hard enough that only 4 updates per key already make all LLMs fail to retrieve the last value; we intentionally designed it this way to keep the search difficulty low. Retrieving values at other positions in the update order performs even worse.)
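
To make the setup concrete, here is a minimal sketch of how such a proactive-interference prompt could be generated and scored. The key names, value vocabulary, prompt wording, and the `n_updates` parameter are illustrative assumptions for this sketch, not the exact construction used to build the dataset.

```python
# Illustrative sketch only: each key is updated several times, updates from
# different keys are interleaved at random, and the model must report the
# LAST value written for every key.
import random

def build_pi_prompt(keys, values, n_updates=4, seed=0):
    rng = random.Random(seed)
    updates = [(key, rng.choice(values)) for key in keys for _ in range(n_updates)]
    rng.shuffle(updates)  # randomized (interleaved) update order
    ground_truth, lines = {}, []
    for key, value in updates:
        ground_truth[key] = value          # later updates overwrite earlier ones
        lines.append(f"{key}: {value}")
    question = "What is the current value (the last value) for " + ", ".join(keys) + "?"
    return "\n".join(lines) + "\n\n" + question, ground_truth

def last_value_accuracy(model_answers, ground_truth):
    """Fraction of keys whose last value the model retrieved correctly."""
    hits = sum(model_answers.get(key) == value for key, value in ground_truth.items())
    return hits / len(ground_truth)

prompt, truth = build_pi_prompt(keys=["key1", "key2", "key3"],
                                values=["apple", "stone", "river", "cloud"],
                                n_updates=4)
print(prompt)  # a short prompt a person can answer trivially
```

A human answering such a prompt only needs to keep the most recent line for each key in mind, which is the working-memory load this kind of test is designed to probe.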

## Hard Mode / Non-Randomized Mode (last but most interesting and striking)
Provided as a separate dataset file.
This mode takes the exact format shown in this document, without randomization. We fix everything and vary only the number of updates, just as in the experiment above, but with randomize_mode turned off (column: randomize_mode).
- This separate dataset consists of 46 of the following blocks in a non-randomized order:

What is the current value (the last value) for key1, key2, ..., key46?

- **This mode is the most striking, as it highlights a fundamental limitation in how LLMs process context: a task at which humans are essentially infallible.**
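
A companion sketch of the non-randomized ordering, under the same illustrative assumptions as the snippet above: all updates for one key form a contiguous block, the blocks for key1 through key46 follow one another in order, and the prompt ends with the same last-value question.

```python
# Non-randomized (hard-mode) sketch: one contiguous block of updates per key,
# blocks laid out in key order, then the last-value question (illustrative format).
def build_ordered_prompt(values_per_key):
    blocks, ground_truth = [], {}
    for key, values in values_per_key.items():   # key1, key2, ... in fixed order
        for value in values:                      # all updates for this key, in order
            blocks.append(f"{key}: {value}")
            ground_truth[key] = value             # the block's final value is the answer
    keys = ", ".join(values_per_key)
    question = f"What is the current value (the last value) for {keys}?"
    return "\n".join(blocks) + "\n\n" + question, ground_truth
```

Because the answer for each key is simply the last line of its block, a human can read the answers off directly, which is why this mode is even easier for people while still defeating LLMs.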

# PI-LLM Dataset File List

This repository hosts the **PI-LLM** dataset.
Currently it includes two files:

- **core.parquet** → the main dataset
- **hardmode_ordered.parquet** → harder for all LLMs but even easier for humans, with ordered update blocks.

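A minimal sketch for inspecting the two files locally with pandas; it assumes the parquet files have already been downloaded to the working directory, and it does not assume any column names beyond the randomize_mode column mentioned above, so it simply prints each file's schema.

```python
# Peek at the two dataset files (assumes pandas + pyarrow are installed and the
# parquet files have been downloaded into the current directory).
import pandas as pd

for filename in ["core.parquet", "hardmode_ordered.parquet"]:
    df = pd.read_parquet(filename)
    print(filename, df.shape)
    print(df.columns.tolist())  # inspect the schema, e.g. the randomize_mode column
    print(df.head(1))
```
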
## Quick Start - Evaluate Your Model
```python