ALL tested SOTA LLMs **cannot reliably retrieve** Value_N. Distribution spans va
## On Randomization

We **RANDOMIZE** update order after generation to mimic unpredictable changes by interleaving updates across different keys (i.e., different keys’ updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller models (under ~600B parameters) lose track after only a few updates, even with 5–8k-token inputs.

See the **Sequential / Original-Non-Random Mode** section at the end of this document, where many LLMs’ performance still **collapses** with only a **small amount of input (5–8k tokens)**.
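The interleaving described above can be sketched as follows. This is a minimal illustration, not the dataset's actual generation code; the `(key, value)` pair representation, the function names, and the fixed seed are all assumptions made for the example:

```python
import random

def randomize_updates(updates, seed=0):
    """Shuffle update order so that updates to different keys interleave
    in context, rather than appearing in contiguous per-key blocks.
    `updates` is a list of (key, value) pairs in generation order.
    (Hypothetical helper, shown for illustration only.)"""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = updates[:]              # keep the original order intact
    rng.shuffle(shuffled)
    return shuffled

def final_values(updates):
    """Ground truth Value_N per key: the last value written to each key
    when the updates are read left to right."""
    state = {}
    for key, value in updates:
        state[key] = value
    return state

# Sequential (contiguous-block) order: all of k1's updates, then k2's.
sequential = [("k1", 1), ("k1", 2), ("k1", 3), ("k2", 10), ("k2", 20)]
randomized = randomize_updates(sequential, seed=42)
```

Note that shuffling changes which update of a key sits last in the context, so the ground truth must be recomputed from the shuffled order with something like `final_values(randomized)`.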