giantfish-fly committed on
Commit d06fb59 · verified · 1 Parent(s): 9bee215

Update README.md

Files changed (1):
README.md +12 -14
README.md CHANGED
@@ -13,16 +13,16 @@ tags:

  configs:
  - config_name: core
- description: Randomized (easier) – keys shuffled across groups to reduce interference; recommended for SOTA model comparison.
  data_files:
  - split: test
    path: core.parquet

- - config_name: hardmode_ordered
- description: Non-randomized (harder) – strict sequential blocks; prove short context(token=3k-8k) can already have very strong context interference, best for stress tests and mechanistic analysis.
  data_files:
  - split: test
- path: hardmode_ordered.parquet
  ---
  ---
  # PI-LLM: The Core Retrieval Challenge Behind MRCR
@@ -87,8 +87,7 @@ The current value of Key1 is Value_N.
 LLMs cannot reliably retrieve Value_N. Distribution spans value_1 to value_N, and as N increases, the answers skew increasingly toward value_1.


- **Note**: We **randomize** the sequence of the 46 key groups in the dataset to **LOWER** the difficulty! (Yes, **this adjustment significantly reduces** difficulty—see findings below. In the original non-randomized test, even the most powerful LLMs lost track after only a few updates.)(total input 5–8k tokens only)
-

 ## Why this is challenging for LLMs:
 - Multiple co-references to the same key cause strong interference.
@@ -96,7 +95,7 @@ LLMs cannot reliably retrieve Value_N. Distribution spans value_1 to value_N, an

 1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
 2. We intentionally restrict the task to retrieving only the last value, to keep search difficulty low and to show that all LLMs are unable to keep track due to **context interference**.
- 3. See the **Hard/Original-Non-Random Mode** section at the end of this document, where all LLMs’ performance **collapses** with only a **small amount of input (5–8k)**

 ## Cognitive science connection: Proactive Interference (PI)
 Our test adopts the **classic proactive** interference paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.
@@ -125,7 +124,7 @@ Two sets of tests are provided, one fix update to 20 and another fixed update pe

 (This test is very hard: only 4 updates per key already make all LLMs fail to retrieve the last value, even though we intentionally designed the task to keep search difficulty low. Retrieving values at any other position performs even worse.)

- ## One more things: Hard Mode / Non-Randomized Mode (Last but most interesting and striking)
 This mode is provided in a separate dataset file (Dataset column: extra_exp_updates_randomoff).
 It uses the exact format shown in this document, without randomization. We fix everything and vary only the number of updates, just as in the experiment above, but with randomize_mode turned off (column: randomize_mode).
 - This separate dataset consists of 46 of the following blocks in a non-randomized order:
@@ -151,13 +150,12 @@ What is the current value (the last value) for key1 key2....key46?


 **Result**
- - In this mode, **SOTA LLMs confuse the last value with earlier value after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLMs' context window).
- - All models quickly confuse earlier values with the most recent one.
- - This is the **original and most striking test**, but we present it separately since performance declines too quickly to allow meaningful ranking across models.
 - Performance for this mode is also **reported in our paper (Figure 4).**
 - **Step-like failure pattern** in these sequential key–value update tests. Retrieval accuracy remains near-perfect as interfering information is added in strictly sequential order, until a model-specific threshold is reached, after which **performance drops rapidly to near-zero**.
- - **This mode is the most striking, as it highlights a fundamental limitation in how LLMs process context—A task that is human infailable.”**
-


 # PI-LLM Dataset File List

@@ -165,7 +163,7 @@ This repository hosts the **PI-LLM** dataset.
 Currently it includes two files:

 - **core.parquet** → the main dataset
- - **hardmode_ordered.parquet** → harder for all LLMs but even easier for humans, with ordered update blocks.


  ## Quick Start - Evaluate Your Model
 

  configs:
  - config_name: core
+ description: Randomized updates (keys shuffled across key–value pairs). Recommended as the primary setting for SOTA model comparison. At the highest stress tier, all tested models (as of 2025) fail to reliably recover the final value.
  data_files:
  - split: test
    path: core.parquet

+ - config_name: additional_sequential
+ description: Non-randomized – strict sequential blocks; shows that even a short context (5k–8k tokens) can already produce strong context interference for most LLMs. Even with this cleanly formatted data, many models' performance still drops rapidly.
  data_files:
  - split: test
+ path: additional_sequential.parquet
  ---
  ---
  # PI-LLM: The Core Retrieval Challenge Behind MRCR
 
 LLMs cannot reliably retrieve Value_N. Distribution spans value_1 to value_N, and as N increases, the answers skew increasingly toward value_1.


+ **Note**: We **randomize** update order to mimic unpredictable changes. Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most models lose track after only a few updates, even with 5–8k-token inputs. (The sequential-mode dataset is provided separately.)


 ## Why this is challenging for LLMs:
  - Multiple co-references to the same key cause strong interference.
 

 1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
 2. We intentionally restrict the task to retrieving only the last value, to keep search difficulty low and to show that all LLMs are unable to keep track due to **context interference**.
+ 3. See the **Sequential / Non-Randomized Mode** section (the original, non-randomized test) at the end of this document, where many LLMs’ performance still **collapses** with only a **small amount of input (5–8k tokens)**. (A minimal sketch of the probe format follows this list.)
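
For illustration only, here is a minimal sketch of how such an interference stream of key–value updates and the final "current value" query might be constructed. This is not the official PI-LLM generator; key names, value strings, and counts are placeholders.

```python
import random

def build_probe(num_keys: int = 46, updates_per_key: int = 4, randomize: bool = True):
    """Build a toy PI-style probe: repeated key-value updates plus a last-value query."""
    keys = [f"key{i}" for i in range(1, num_keys + 1)]
    # Each key receives `updates_per_key` successive values; only the last occurrence counts.
    updates = [(k, f"{k}_value_{n}") for k in keys for n in range(1, updates_per_key + 1)]
    if randomize:
        random.shuffle(updates)  # randomized mode: updates from different keys interleave
    # (non-randomized mode keeps strictly sequential blocks, one key at a time)
    context = "\n".join(f"The current value of {k} is {v}." for k, v in updates)
    question = "What is the current value (the last value) for " + ", ".join(keys) + "?"
    # Ground truth: whichever value appears last for each key in the stream.
    answer_key = {}
    for k, v in updates:
        answer_key[k] = v
    return context + "\n\n" + question, answer_key

prompt, expected = build_probe(num_keys=3, updates_per_key=4)
print(prompt)
print(expected)
```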

 ## Cognitive science connection: Proactive Interference (PI)
 Our test adopts the **classic proactive** interference paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.


 (This test is very hard: only 4 updates per key already make all LLMs fail to retrieve the last value, even though we intentionally designed the task to keep search difficulty low. Retrieving values at any other position performs even worse.)

+ ## One more thing: Sequential / Non-Randomized Mode (last but interesting)
 This mode is provided in a separate dataset file (Dataset column: extra_exp_updates_randomoff).
 It uses the exact format shown in this document, without randomization. We fix everything and vary only the number of updates, just as in the experiment above, but with randomize_mode turned off (column: randomize_mode).
 - This separate dataset consists of 46 of the following blocks in a non-randomized order:
 


 **Result**
+ - In this mode, **most modern LLMs still confuse the last value with earlier values after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
+ - Models quickly confuse earlier values with the most recent one.
+ - This is the **original and simplest form of the test.**
 - Performance for this mode is also **reported in our paper (Figure 4).**
 - **Step-like failure pattern** in these sequential key–value update tests. Retrieval accuracy remains near-perfect as interfering information is added in strictly sequential order, until a model-specific threshold is reached, after which **performance drops rapidly to near-zero**.


  # PI-LLM Dataset File List


 Currently it includes two files:

 - **core.parquet** → the main dataset
+ - **additional_sequential.parquet** → easy/sequential mode; still hard for many LLMs but very easy for humans, with strictly ordered update blocks.


  ## Quick Start - Evaluate Your Model
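
A minimal quick-start sketch, assuming the Hugging Face `datasets` library and the dataset id `giantfish-fly/PI-LLM` (an assumption based on this page; adjust the id if it differs). The column names in the commented evaluation loop are placeholders, not confirmed fields of this dataset.

```python
from datasets import load_dataset

# Load the randomized (recommended) config; use "additional_sequential" for the ordered mode.
core = load_dataset("giantfish-fly/PI-LLM", "core", split="test")
print(core.column_names)  # inspect the schema, e.g. exp_updates, randomize_mode, prompt/answer fields

# Sketch of an evaluation loop; "prompt" and "expected_answer" are placeholder column names.
# for row in core:
#     reply = my_model_generate(row["prompt"])      # your model call goes here
#     hit = str(row["expected_answer"]) in reply    # naive last-value accuracy check
```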