# BRU Dataset: Balancing Rigor and Utility for Testing Cognitive Biases in LLMs

This dataset accompanies our paper "Balancing Rigor and Utility: Mitigating Cognitive Biases in Large Language Models for Multiple-Choice Questions", accepted at CogSci 2025.
## About the Dataset
The BRU dataset includes 205 multiple-choice questions, each crafted to assess how LLMs handle well-known cognitive biases. Unlike widely used datasets such as MMLU, TruthfulQA, and PIQA, BRU offers comprehensive coverage of cognitive distortions, rather than focusing solely on factual correctness or reasoning.
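To get a quick feel for the data, here is a minimal sketch of loading one of the per-bias CSV files with the Hugging Face `datasets` library. The repository path and file name below are placeholders, not the actual identifiers; substitute the real repo ID and one of the CSV files from this repository.

```python
from datasets import load_dataset

# Placeholder URL -- replace <org>/<bru-repo> and the file name with the
# actual repository path and one of the per-bias CSV files in this repo.
ds = load_dataset(
    "csv",
    data_files="https://huggingface.co/datasets/<org>/<bru-repo>/resolve/main/anchoring_bias.csv",
    split="train",
)

# Each row holds a question ID, the full MCQ text, and the gold answer label.
print(ds[0])
```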
The dataset was developed through a multidisciplinary collaboration:
- An experienced psychologist designed the bias scenarios.
- A medical data expert ensured content validity.
- Two NLP researchers formatted the dataset for LLM evaluation.
Each question is backed by references to psychological literature and frameworks, with full documentation in the paper's appendix.
## Covered Bias Categories
The dataset includes questions targeting the following eight types of cognitive biases:
- Anchoring Bias
- Base Rate Fallacy
- Conjunction Fallacy
- Gambler's Fallacy
- Insensitivity to Sample Size
- Overconfidence Bias
- Regression Fallacy
- Sunk Cost Fallacy
## Dataset Format
Each `.csv` file in this repository corresponds to one bias type. All files follow the same format:
| Question ID | Question Text | Ground Truth Answer |
|---|---|---|
| 1 | (MCQ content) | A |
| 2 | (MCQ content) | C |
| ... | ... | ... |
- First row: column headers
- First column: question number
- Second column: question content (includes options)
- Third column: correct answer label (e.g., A, B, C, D)
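As an illustration, the sketch below reads one file with pandas and walks through it row by row. The file name is hypothetical (use any per-bias CSV from this repo), and the encoding fallback mentioned in the comment is an assumption for files that are not valid UTF-8.

```python
import pandas as pd

# Hypothetical file name -- use one of the per-bias CSVs from this repo.
# Assumption: if a file fails to decode as UTF-8, retry with a legacy
# encoding, e.g. pd.read_csv(..., encoding="cp1252").
df = pd.read_csv("anchoring_bias.csv")

for _, row in df.iterrows():
    qid = row["Question ID"]
    question = row["Question Text"]    # full MCQ text, options included
    gold = row["Ground Truth Answer"]  # single letter label, e.g. "A"
    # ...prompt your LLM with `question` and compare its choice against `gold`
```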
## Citation
If you use the BRU dataset in your research, please cite our paper:
```bibtex
@article{wang2024balancingrigorutilitymitigating,
  title={Balancing Rigor and Utility: Mitigating Cognitive Biases in Large Language Models for Multiple-Choice Questions},
  author={Liman Wang and Hanyang Zhong and Wenting Cao and Zeyuan Sun},
  year={2024},
  eprint={2406.10999},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2406.10999},
}
```