giantfish-fly committed on
Commit c235766 · verified · 1 Parent(s): a37835f

Update README.md

Files changed (1)
  1. README.md +16 -10
README.md CHANGED
@@ -28,12 +28,13 @@ configs:
 
 ---
 ---
- # PI-LLM: The Core Retrieval Challenge Behind MRCR
+ # PI-LLM Bench: The Core Retrieval Challenge Behind MRCR
 (ICML 2025 Long-Context Foundation Models Workshop Accepted)
 - a simple context-interference evaluation.
 
+
 ## TL;DR
- We identify a task that is **super easy for humans** but where all LLMs—from early 0.1B to the most modern 600B+ (GPT-5, Grok-4, Gemini, DeepSeek, etc.)—consistently **fail in the Same Way**. This pinpoints the **core challenge of MRCR**.
+ We identify a task that is **super easy for humans** but where all LLMs—from early 0.1B to the most modern 600B+ (GPT-5, Grok-4, Gemini, DeepSeek, etc.)—consistently **fail in the same way**. This pinpoints the **core challenge of MRCR**.
 
 
 - Multi-round co-reference in context interference:
@@ -83,23 +84,28 @@ The current value of Key1 is Value_N.
 ```
 
 
- ## Note on dataset scale:
- (N from 1 to 400). We put up to 46 such groups (key1..key46) together and then ask the model to retrieve just the last value of each key. We make sure all values are different, so when the model replies, we know how far away the answer is from the correct answer.
+ ## Results:
+ LLMs **cannot reliably retrieve** Value_N. The distribution of answers spans value_1 to value_N, and **as N increases**, the **answers skew** increasingly toward **value_1**.
 
 
- **Results:**
- LLMs cannot reliably retrieve Value_N. Distribution spans value_1 to value_N, and as N increases, the answers skew increasingly toward value_1.
+
+ ## Note on dataset scale:
+ N ranges from 1 to 400. We put up to 46 such groups (key1..key46) together and then ask the model to retrieve only the last value of each key. All values are distinct, so when the model replies, we know how far its answer is from the correct one.
 
 
- **Note**:We **randomize** update order to mimic unpredictable changes. Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context; And in the sequential setting, most models lose track after only a few updates—even with 5–8k-token inputs.(sequential mode dataset provided separately)
 
 ## Why this is challenging for LLMs:
 - Multiple co-references to the same key cause strong interference.
 
-
- 1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
+ 1. As the number of updates per key (N) increases, LLMs **confuse earlier values** with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
 2. We intentionally restrict the task to retrieving only the last value, keeping search difficulty low and showing that all LLMs are unable to keep track due to **context interference**.
- 3. See the **Sequntial /Original-Non-Random Mode** section at the end of this document, where many LLMs’ performance still **collapses** with only a **small amount of input (5–8k)**
+
+
+ ## On Randomization
+ We **RANDOMIZE** update order after generation to mimic unpredictable changes by interleaving updates across different keys (i.e., different keys’ updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller (less than ~600B) models lose track after only a few updates, even with 5–8k-token inputs.
+ See the **Sequential / Original-Non-Random Mode** section at the end of this document, where many LLMs’ performance still **collapses** with only a **small amount of input (5–8k tokens)**.
+
+
 
 ## Cognitive science connection: Proactive Interference (PI)
 Our test adopts the **classic proactive** interference paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.
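To make the setup described in the README concrete, here is a minimal sketch of how a prompt of this kind could be generated. It is a sketch under stated assumptions, not the dataset's actual generation code: the function name `build_pi_prompt`, the parameters `n_keys`, `n_updates`, and `randomize`, the `KeyX_Value_j` naming, and the query wording are all illustrative. Only the overall recipe follows the README: many distinct-valued updates per key, optionally interleaved at random, followed by a query for each key's last value.

```python
import random

def build_pi_prompt(n_keys=46, n_updates=10, randomize=True, seed=0):
    """Sketch of a proactive-interference prompt: every key is updated
    n_updates times with distinct values, then the model is asked for the
    most recent value of each key. Illustrative only."""
    rng = random.Random(seed)
    keys = [f"Key{i}" for i in range(1, n_keys + 1)]

    # All values are distinct, so any reply can be traced back to the
    # update that produced it (and to its distance from the last update).
    updates = [(key, f"{key}_Value_{j}")
               for key in keys
               for j in range(1, n_updates + 1)]

    if randomize:
        rng.shuffle(updates)  # randomized mode: interleave updates across keys
    # randomize=False keeps each key's updates in one contiguous block (sequential mode)

    lines = [f"The current value of {key} is {value}." for key, value in updates]
    question = "For each key, report only its current (most recent) value."
    prompt = "\n".join(lines + ["", question])

    # Ground truth: the value written by each key's final update.
    answers = {}
    for key, value in updates:
        answers[key] = value
    return prompt, answers
```

With `randomize=True`, each key's final update tends to land near the end of the interleaved stream, which is the counterintuitive advantage noted in the README; with `randomize=False`, a key's final update closes its own contiguous block and can sit far from the end of the context, the sequential setting in which performance collapses much earlier.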
 
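The distance-based reading of the results can also be made concrete. Because every value written for a key is distinct, a reply can be mapped back to the update that produced it, and the gap to the final update measures how far the model fell behind. The helper below is an illustrative sketch, not the benchmark's scoring code; the function name and the exact-match assumption on the reply format are illustrative assumptions.

```python
from typing import List, Optional

def update_distance(model_answer: str, key_history: List[str]) -> Optional[int]:
    """Distance between the model's reply and the correct (last) value.

    key_history holds one key's values in the order they were written, so
    key_history[-1] is the correct answer. Returns 0 for a correct reply,
    k if the reply is the value written k updates before the last one, and
    None if the reply matches none of the written values. Sketch only.
    """
    answer = model_answer.strip()
    if answer not in key_history:
        return None
    return (len(key_history) - 1) - key_history.index(answer)

# Example with three updates to a single key; the last written value is "red".
history = ["blue", "green", "red"]
print(update_distance("red", history))   # 0 -> correct, most recent value
print(update_distance("blue", history))  # 2 -> pulled all the way back to value_1
```

Scored this way, the failure mode described in the README shows up as the average distance growing with N and the replies drifting toward value_1, rather than as a flat pass/fail rate.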