Update README.md
README.md CHANGED
@@ -13,10 +13,16 @@ tags:
 
 configs:
 - config_name: core
-  description: Randomized updates (keys shuffled across key–value pairs). Recommended as the primary/SOTA comparison setting. At the highest stress tier, all tested models (as of
+  description: Randomized updates (keys shuffled across key–value pairs). Recommended as the primary/SOTA comparison setting. At the highest stress tier, all tested models (as of May 2025) fail to reliably recover the final value.
   data_files:
   - split: test
     path: core.parquet
+
+- config_name: sequential_additional
+  description: Non-randomized – clear, strict sequential update blocks. Demonstrates that a short context (5k–8k tokens) already produces strong context interference for most LLMs; even with this well-formatted data, many models' performance still drops rapidly.
+  data_files:
+  - split: test
+    path: sequential_additional.parquet
 
 
 
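The two configs above map directly onto `datasets` loading calls. A minimal sketch, assuming a placeholder repo id (the actual Hub id is not shown in this diff); config and split names come from the YAML above:

```python
# Minimal sketch of loading the two configs defined above.
# "your-org/PI-LLM" is a placeholder repo id -- substitute the actual
# Hugging Face dataset id hosting PI-LLM.
from datasets import load_dataset

core = load_dataset("your-org/PI-LLM", "core", split="test")
sequential = load_dataset("your-org/PI-LLM", "sequential_additional", split="test")

print(core)        # row count and features of the randomized-update set
print(sequential)  # row count and features of the sequential set
```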
@@ -147,7 +153,7 @@ What is the current value (the last value) for key1 key2....key46?
 
 
 **Result**
-- In this mode, **most modern LLMs still confuse the last value with an earlier value after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
+- In this mode, **most modern LLMs (all <600B) still confuse the last value with an earlier value after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
 - Models quickly confuse earlier values with the most recent one.
 - This is the **original and simplest test.**
 - Performance for this mode is also **reported in our paper (Figure 4).**
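Behind these bullets, correctness means recovering the final value of each key after all updates; answering any earlier value counts as an interference error. A hedged sketch of that scoring rule (the field layout is illustrative, not the dataset's actual schema):

```python
# Hedged sketch of the scoring idea: after a stream of key-value
# updates, only the LAST value per key counts as correct; returning
# any earlier value is an interference error. Data layout is assumed.
from collections import OrderedDict

def last_values(updates):
    """Reduce a stream of (key, value) updates to each key's final value."""
    final = OrderedDict()
    for key, value in updates:
        final[key] = value  # later updates overwrite earlier ones
    return final

def score_answers(updates, model_answers):
    """Fraction of keys for which the model returned the final value."""
    final = last_values(updates)
    correct = sum(model_answers.get(k) == v for k, v in final.items())
    return correct / len(final)

# Toy example: key1 is updated twice; only "blue" is correct for it.
updates = [("key1", "red"), ("key2", "cat"), ("key1", "blue")]
print(score_answers(updates, {"key1": "blue", "key2": "cat"}))  # 1.0
print(score_answers(updates, {"key1": "red", "key2": "cat"}))   # 0.5 (interference)
```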
@@ -159,8 +165,8 @@ What is the current value (the last value) for key1 key2....key46?
 This repository hosts the **PI-LLM** dataset.
 Currently it includes two files:
 
-- **core.parquet** → the
-- **
+- **core.parquet** → Main dataset (randomized updates). Recommended as the primary/SOTA comparison setting; all tested models fail to reliably retrieve the last value.
+- **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (<600B) models are especially affected, with proactive-interference effects clearly exposed even in short contexts (~5–8k tokens).
 
 
 ## Quick Start - Evaluate Your Model
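As a starting point, the two parquet files listed above can also be fetched and inspected directly; a minimal sketch, again with a placeholder repo id:

```python
# Fetch the raw parquet files from the Hub and inspect them with pandas.
# The repo id is a placeholder; the filenames match the files listed above.
from huggingface_hub import hf_hub_download
import pandas as pd

for filename in ["core.parquet", "sequential_additional.parquet"]:
    path = hf_hub_download(repo_id="your-org/PI-LLM",  # placeholder id
                           filename=filename,
                           repo_type="dataset")
    df = pd.read_parquet(path)
    print(filename, df.shape)  # rows x columns for each file
```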