LLMs cannot reliably retrieve Value_N. The distribution spans value_1 to value_N, and:

1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally restrict the task to retrieving only the last value, keeping search difficulty low and showing that all LLMs are unable to keep track due to **context interference**.
3. See the **Hard/Original-Non-Random Mode** section at the end of this document, where all LLMs’ performance **collapses** with only a **small amount of input (5–8k)**.
## Cognitive science connection: Proactive Interference (PI)

See: https://sites.google.com/view/cog4llm

- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in our paper).
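To make "declines approximately log-linearly" concrete: a minimal sketch that fits accuracy as a + b·ln(n_updates) by simple least squares. The accuracy numbers below are invented purely for illustration (the real curves are in the paper):

```python
import math

# Invented accuracies for increasing updates per key (illustration only;
# real numbers come from the dataset's exp_updates runs).
n_updates = [2, 4, 8, 16, 32, 64]
accuracy = [0.98, 0.91, 0.83, 0.76, 0.68, 0.61]

# Closed-form simple linear regression of accuracy on ln(n_updates).
xs = [math.log(n) for n in n_updates]
mx = sum(xs) / len(xs)
my = sum(accuracy) / len(accuracy)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, accuracy))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx
print(f"accuracy ~ {a:.3f} + {b:.3f} * ln(n_updates)")  # b < 0: log-linear decline
```

A negative slope b with a roughly constant accuracy drop per doubling of n_updates is what "log-linear decline" means here.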
## Full details of the 3 tests
This dataset includes 2 additional dimensions of evaluation that show current LLMs' limits, covering SOTA models such as GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, and Llama 4.
- Experiment 2 (Dataset column: exp_keys).
LLMs' capacity to resist interference, and their accuracy in retrieving the last value, decrease log-linearly as the number of concurrent keys (n_keys) grows.
This experiment fixes everything else and varies only n_keys. (Two sets of tests are provided: one fixes updates per key at 350, the other at 125 as a lower-difficulty setting.)
- Experiment 3 (Dataset column: exp_valuelength). This causes a rapid decline across LLMs (GPT-5 and Grok-4 decline similarly to GPT-2).
Retrieval accuracy also decreases log-linearly as value length grows.
This experiment fixes everything else and varies only value_length.
Two sets of tests are provided: one fixes updates per key at 20, the other at only 4, as lower-difficulty settings.
(This test is hard: even 4 updates per key make all LLMs fail to retrieve the last value, a target we intentionally chose to keep search difficulty low. Retrieving values at other positions performs even worse.)
## Hard Mode / Non-Randomized Mode (last, but the most striking)
This mode takes the exact format shown in this document, without randomization. We fix everything else and vary only the number of updates.
- This separate dataset consists of 46 of the following blocks, in non-randomized order:

Key2: Value_2
......
Key2: Value_N
...all the way to the key46 block
Question:

What is the current value (the last value) for key1, key2, ..., key46?

- All models quickly confuse earlier values with the most recent one.
- This is the **original and most striking test**, but we present it separately since performance declines too quickly to allow meaningful ranking across models.
- Performance for this mode is also **reported in our paper (Figure 4).**
- **This mode is the most striking, as it highlights a fundamental limitation in how LLMs process context: a task at which humans are nearly infallible.**
## Quick Start - Evaluate Your Model
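The Quick Start code itself is elided in this diff view. As a hedged sketch of the task format described above (hypothetical helper names, not the official dataset loader), one can build a key-value update stream and score last-value retrieval like this:

```python
import random

def make_prompt(n_keys=3, n_updates=5, seed=0, randomize=True):
    """Build a key-value update stream plus its ground-truth final values.

    Hypothetical re-implementation of the task described above, not the
    official loader: each key is updated n_updates times and the model must
    report only the LAST value per key. randomize=False keeps each key's
    updates in one contiguous block, as in the non-randomized hard mode.
    """
    rng = random.Random(seed)
    keys = [f"key{i + 1}" for i in range(n_keys)]
    updates = [f"{k}: value_{rng.randint(0, 9999)}"
               for k in keys for _ in range(n_updates)]
    if randomize:
        rng.shuffle(updates)  # interleave updates across keys
    final = {}
    for line in updates:      # ground truth = last update seen per key
        k, v = line.split(": ")
        final[k] = v
    question = ("\n\nQuestion: what is the current value (the last value) for "
                + ", ".join(keys) + "?")
    return "\n".join(updates) + question, final

def score(answers, final):
    """Fraction of keys whose last value was retrieved exactly."""
    return sum(answers.get(k) == v for k, v in final.items()) / len(final)
```

Feed the prompt to your model, parse its per-key answers into a dict, and pass that to `score`; sweeping n_updates or n_keys mirrors the exp_updates and exp_keys dimensions.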

Jiaqiu Vince Sun*
PhD Candidate, NYU Center for Neuroscience
A former professional architect turned neuroscientist, Jiaqiu draws on his background in spatial design, cognitive neuroscience, and philosophy of mind to investigate how memory emerges and diverges in brains and artificial systems. His primary focus lies in the higher-level functions of the brain, such as self-monitoring and control.