Dataset Card for InductionBench
InductionBench is a new benchmarking suite designed to test the inductive reasoning abilities of large language models.
Dataset Details
Dataset Description
The benchmark is grounded in formal definitions of inductive function classes: regular functions computed by finite-state transducers, and subregular subclasses such as input strictly local (ISL), left output strictly local (L-OSL), and right output strictly local (R-OSL) functions.
These classes have been studied extensively in theoretical computer science and come with provable guarantees, such as uniqueness of the optimal solution, bounds on the search-space complexity, and a known amount of data sufficient for identification.
Each datapoint contains 7 fields:
- groundtruth: the unique minimum set of rules that generates the data
- datasample: a set of input-output pairs generated by the ground-truth rule set; it is also a characteristic sample
- input: the prompt, ready to feed directly to models or API endpoints
- type: the function class of the function represented by the ground-truth rule set; one of ISL, L-OSL, or R-OSL
- k: the Markovian window size, ranging from 2 to 4
- vocab_size: the alphabet size, ranging from 5 to 8
- number_of_rules: the number of rules, ranging from 3 to 5
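As a rough illustration, one datapoint can be modeled as a typed record with these 7 fields and their stated ranges. This is a sketch under assumptions: the class name and the concrete Python types are hypothetical, not the released schema.

```python
from dataclasses import dataclass

# Hypothetical container for one InductionBench datapoint. Field names
# follow the card above; the concrete types are assumptions.
@dataclass
class Datapoint:
    groundtruth: dict   # minimal rule set, e.g. {"ag": "h"}
    datasample: dict    # characteristic sample of input-output pairs
    input: str          # prompt to feed to a model
    type: str           # "ISL", "L-OSL", or "R-OSL"
    k: int              # Markovian window size, 2-4
    vocab_size: int     # alphabet size, 5-8
    number_of_rules: int  # 3-5

    def __post_init__(self):
        # Enforce the ranges documented on the card.
        assert self.type in {"ISL", "L-OSL", "R-OSL"}
        assert 2 <= self.k <= 4
        assert 5 <= self.vocab_size <= 8
        assert 3 <= self.number_of_rules <= 5
```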
Here is an example datapoint belonging to the ISL function class, with k = 2, vocab size 5, and 3 rules.
groundtruth: {ag: h, gh: b, ge: d}
datasample: {
'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd', 'e': 'e', 'f': 'f', 'g': 'g', 'h': 'h',
'dc': 'dc', 'cb': 'cb', 'gd': 'gd', 'hh': 'hh', 'dh': 'dh', 'eh': 'eh',
'ef': 'ef', 'ah': 'ah', 'af': 'af', 'db': 'db', 'bb': 'bb', 'gf': 'gf',
'hb': 'hb', 'ga': 'ga', 'bh': 'bh', 'gg': 'gg', 'ac': 'ac', 'ba': 'ba',
'cc': 'cc', 'fb': 'fb', 'ae': 'ae', 'ed': 'ed', 'hf': 'hf', 'ab': 'ab',
'ca': 'ca', 'df': 'df', 'bf': 'bf', 'ch': 'ch', 'ha': 'ha', 'da': 'da',
'ee': 'ee', 'fd': 'fd', 'cg': 'cg', 'fe': 'fe', 'dd': 'dd', 'eg': 'eg',
'ff': 'ff', 'ad': 'ad', 'ge': 'gd', 'ec': 'ec', 'fh': 'fh', 'bd': 'bd',
'he': 'he', 'gc': 'gc', 'be': 'be', 'de': 'de', 'ce': 'ce', 'gh': 'gb',
'cf': 'cf', 'fg': 'fg', 'gb': 'gb', 'aa': 'aa', 'hc': 'hc', 'dg': 'dg',
'ea': 'ea', 'cd': 'cd', 'hg': 'hg', 'bc': 'bc', 'ag': 'ah', 'fc': 'fc',
'hd': 'hd', 'fa': 'fa', 'bg': 'bg', 'eb': 'eb',
'gbebd': 'gbebd', 'gca': 'gca', 'habgd': 'habgd', 'fhc': 'fhc', 'daccfe': 'daccfe',
'cahgef': 'cahgdf', 'ecah': 'ecah', 'bfaecd': 'bfaecd', 'fefc': 'fefc',
'eggbad': 'eggbad', 'fcddhg': 'fcddhg', 'gaggac': 'gahgac', 'bech': 'bech',
'haahf': 'haahf', 'hag': 'hah', 'gfadaa': 'gfadaa', 'bdbdee': 'bdbdee',
'cbhgba': 'cbhgba', 'hbe': 'hbe', 'ahh': 'ahh', 'gfcdge': 'gfcdgd',
'fbf': 'fbf', 'aaecc': 'aaecc', 'efgce': 'efgce', 'daecbe': 'daecbe', 'fegb': 'fegb',
'ffbh': 'ffbh', 'aefc': 'aefc', 'abge': 'abgd', 'hdgb': 'hdgb', 'dec': 'dec',
'dfbb': 'dfbb', 'ahdbhg': 'ahdbhg', 'dad': 'dad', 'cbdhg': 'cbdhg', 'cbh': 'cbh',
'hfhhd': 'hfhhd', 'dafff': 'dafff', 'cge': 'cgd', 'hbbabd': 'hbbabd', 'cch': 'cch',
'gab': 'gab', 'bgdegh': 'bgdegb', 'daac': 'daac', 'efb': 'efb', 'eegg': 'eegg', 'accd': 'accd',
'faa': 'faa', 'gchb': 'gchb', 'cfgahg': 'cfgahg', 'abhc': 'abhc'
}
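The pairs above are consistent with the following reading of an ISL function with k = 2: the output character at each position depends only on the input window of length k ending at that position, and positions whose window matches no rule are copied through unchanged. A minimal sketch (the function name and the exact application convention are inferred from the sample, not taken from the paper):

```python
def apply_isl(rules: dict, s: str, k: int = 2) -> str:
    """Apply input-strictly-local rules: the output at position i is
    determined solely by the input window s[i-k+1 : i+1]; if no rule
    matches that window, the input character is copied unchanged."""
    out = []
    for i, ch in enumerate(s):
        window = s[max(0, i - k + 1) : i + 1]
        out.append(rules.get(window, ch))
    return "".join(out)

rules = {"ag": "h", "gh": "b", "ge": "d"}
print(apply_isl(rules, "cahgef"))  # reproduces the sample pair 'cahgef' -> 'cahgdf'
```

Under this reading, e.g. 'abge' maps to 'abgd' because the window 'ge' at the final position triggers the rule ge: d, matching the pair listed above.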
- Curated by: Wenyue Hua
- Language(s) (NLP): English
Dataset Sources
- Repository: https://github.com/Wenyueh/inductive_reasoning_benchmark
- Paper: InductionBench: LLMs Fail in the Simplest Complexity Class
Curation Rationale
Synthetic Data Generation: Each sample (input-output pair) is generated using a controlled procedure for each subregular class. For instance, to test ISL (Input Strictly Local) functions, the data is constructed so that the output at a given position depends only on a limited, fixed window of the input. Similar strategies are used for L-OSL (Left-Output Strictly Local) and R-OSL (Right-Output Strictly Local).
Rationale: This ensures each dataset instance directly reflects the intended property of the function class. It also allows for parametric variation (e.g., altering the window size, length of strings, or complexity of transformations) so we can systematically test how LLMs handle different facets of inductive reasoning.
Ambiguity Control: In inductive reasoning, a dataset can sometimes be explained by multiple hypotheses. InductionBench tasks are constructed so that there is a single, best minimal description within the function class.
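The generation procedure described above can be sketched for the ISL case as follows. This is entirely illustrative: the released generation code lives in the linked repository and likely differs, and this sketch does not enforce the characteristic-sample or uniqueness properties the benchmark guarantees.

```python
import itertools
import random

def make_isl_instance(vocab_size: int, k: int, number_of_rules: int, seed: int = 0):
    """Sketch of generating one ISL datapoint: sample a rule set mapping
    distinct k-length input windows to single output characters, then
    produce input-output pairs by applying the rules to random strings."""
    rng = random.Random(seed)
    alphabet = "abcdefgh"[:vocab_size]

    # Sample distinct left-hand sides (k-grams); each right-hand side differs
    # from the character it rewrites, so every rule changes something.
    lhs_pool = ["".join(p) for p in itertools.product(alphabet, repeat=k)]
    rules = {}
    for lhs in rng.sample(lhs_pool, number_of_rules):
        rules[lhs] = rng.choice([c for c in alphabet if c != lhs[-1]])

    def apply(s: str) -> str:
        return "".join(
            rules.get(s[max(0, i - k + 1) : i + 1], ch) for i, ch in enumerate(s)
        )

    # NOTE: a true characteristic sample must cover every relevant window;
    # random sampling, as here, does not guarantee that.
    inputs = {
        "".join(rng.choices(alphabet, k=rng.randint(1, 6))) for _ in range(50)
    }
    datasample = {s: apply(s) for s in sorted(inputs)}
    return rules, datasample
```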
Recommendations
Use this benchmark to probe the inductive reasoning abilities of large language models.
Citation
@article{hua2025inductionbench,
  title={InductionBench: LLMs Fail in the Simplest Complexity Class},
  author={Hua, Wenyue and Wong, Tyler and Fei, Sun and Pan, Liangming and Jardine, Adam and Wang, William Yang},
  journal={arXiv preprint arXiv:2502.15823},
  year={2025}
}
Dataset Card Authors
Wenyue Hua