---
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: names
    dtype: string
  - name: parallel_chain
    dtype: string
  - name: parallel_total_val
    dtype: float64
  - name: parallel_lastname
    dtype: string
  - name: parallel_single_val
    dtype: float64
  - name: forward_chain
    dtype: string
  - name: forward_total_val
    dtype: float64
  - name: forward_lastname
    dtype: string
  - name: forward_single_val
    dtype: float64
  - name: backward_chain
    dtype: string
  - name: backward_total_val
    dtype: float64
  - name: backward_lastname
    dtype: string
  - name: backward_single_val
    dtype: float64
  - name: chaotic_chain
    dtype: string
  - name: chaotic_total_val
    dtype: float64
  - name: chaotic_lastname
    dtype: string
  - name: chaotic_single_val
    dtype: float64
  splits:
  - name: k5
    num_bytes: 178184
    num_examples: 200
  - name: k10
    num_bytes: 333938
    num_examples: 200
  - name: k20
    num_bytes: 647136
    num_examples: 200
  - name: k50
    num_bytes: 1582289
    num_examples: 200
  - name: k100
    num_bytes: 3142590
    num_examples: 200
  - name: k200
    num_bytes: 6266799
    num_examples: 200
  download_size: 4072876
  dataset_size: 12150936
configs:
- config_name: default
  data_files:
  - split: k5
    path: data/k5-*
  - split: k10
    path: data/k10-*
  - split: k20
    path: data/k20-*
  - split: k50
    path: data/k50-*
  - split: k100
    path: data/k100-*
  - split: k200
    path: data/k200-*
task_categories:
- question-answering
language:
- en
tags:
- llm-evaluation
- long-context
- reasoning
- benchmark
---
## NeedleChain: Measuring Intact Long-Context Reasoning Capability of Large Language Models | |
<p align="center">
GitHub: <a href="https://github.com/hyeonseokk/NeedleChain">Official GitHub repository</a>
<br>
Paper: <a href="https://arxiv.org/abs/2507.22411">Official paper</a>
</p>
--- | |
<p align="center"> | |
<img src="needlechain.png" width="500"/> | |
</p> | |
NeedleChain is a benchmark designed to evaluate LLMs' intact long-context understanding.
Every part of the provided context is relevant to the query, so answering it requires comprehending the context as a whole.
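
The splits declared in the metadata above (`k5` through `k200`, 200 examples each) can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the Hub repository ID is an assumption based on the GitHub organization name and may differ from the actual path.

```python
from datasets import load_dataset

# NOTE: the repository ID is assumed from the GitHub organization name;
# replace it with the actual Hub path of this dataset if it differs.
REPO_ID = "hyeonseokk/NeedleChain"

# Each split (k5, k10, k20, k50, k100, k200) contains 200 examples.
ds = load_dataset(REPO_ID, split="k5")
print(ds.column_names)   # idx, names, and the four chain variants with their values
print(ds[0]["names"])    # the `names` string of the first example
```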
--- | |
To construct NeedleChain datasets manually, please refer to our official GitHub repository.
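
For illustration, the feature schema above stores each chain in four orderings (parallel, forward, backward, chaotic), each paired with a total value, a last name, and that name's single value. The sketch below builds a simple prompt from these fields, continuing from the loading snippet above; the question wording is a placeholder, not the official benchmark query.

```python
# Illustrative use of the per-ordering fields; the question text below is a
# placeholder, not the official NeedleChain query.
example = ds[0]  # `ds` comes from the loading sketch above

for order in ("parallel", "forward", "backward", "chaotic"):
    context = example[f"{order}_chain"]
    question = "What is the total value across the whole chain?"  # placeholder wording
    prompt = f"{context}\n\nQuestion: {question}"
    # Reference fields for checking a model's answer (names follow the schema above).
    gold_total = example[f"{order}_total_val"]
    gold_single = example[f"{order}_single_val"]
    last_name = example[f"{order}_lastname"]
    print(f"{order}: {len(prompt)} chars, total = {gold_total}, "
          f"last name = {last_name}, single = {gold_single}")
```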