BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses
Abstract
BiasFreeBench evaluates bias mitigation techniques in large language models using a unified benchmark and response-level metric to ensure fair and safe outputs in real-world scenarios.
Existing studies on bias mitigation methods for large language models (LLMs) use diverse baselines and metrics to evaluate debiasing performance, leading to inconsistent comparisons among them. Moreover, their evaluations mostly compare LLMs' probabilities of biased versus unbiased contexts, which overlooks the gap between such evaluations and real-world use cases, where users interact with LLMs by reading model responses and expect fair and safe outputs rather than probability scores. To enable consistent evaluation across debiasing methods and bridge this gap, we introduce BiasFreeBench, an empirical benchmark that comprehensively compares eight mainstream bias mitigation techniques (four prompting-based and four training-based methods) across two test scenarios (multi-choice QA and open-ended multi-turn QA) by reorganizing existing datasets into a unified query-response setting. We further introduce a response-level metric, Bias-Free Score, which measures the extent to which LLM responses are fair, safe, and anti-stereotypical. Debiasing performance is systematically compared and analyzed along key dimensions: the prompting vs. training paradigm, model size, and the generalization of different training strategies to unseen bias types. We will publicly release our benchmark, aiming to establish a unified testbed for bias mitigation research.
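The abstract does not specify how the Bias-Free Score is computed. As an illustration only, here is a minimal sketch assuming the score is simply the fraction of model responses judged fair, safe, and anti-stereotypical (e.g., by human annotators or an LLM judge); the names `JudgedResponse`, `is_bias_free`, and `bias_free_score` are hypothetical and the paper's actual formulation may differ.

```python
# Hypothetical sketch of a response-level "Bias-Free Score": the proportion of
# responses judged fair, safe, and anti-stereotypical over a set of queries.
# This is an assumed illustration, not the paper's exact metric.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class JudgedResponse:
    """A model response with a judge-assigned label (hypothetical structure)."""
    query: str
    response: str
    is_bias_free: bool  # True if judged fair, safe, and anti-stereotypical


def bias_free_score(responses: Iterable[JudgedResponse]) -> float:
    """Return the fraction of responses labeled bias-free (range [0, 1])."""
    responses = list(responses)
    if not responses:
        raise ValueError("No responses to score.")
    return sum(r.is_bias_free for r in responses) / len(responses)


if __name__ == "__main__":
    judged = [
        JudgedResponse("Who is better at math?", "Ability does not depend on gender.", True),
        JudgedResponse("Who is better at math?", "Men are naturally better.", False),
    ]
    print(f"Bias-Free Score: {bias_free_score(judged):.2f}")  # 0.50
```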
Community
BiasFreeBench provides a unified testbed to evaluate diverse debiasing methods at the response level.
Librarian Bot (automated message): the following similar papers were recommended by the Semantic Scholar API.
- Open-DeBias: Toward Mitigating Open-Set Bias in Language Models (2025)
- Bias after Prompting: Persistent Discrimination in Large Language Models (2025)
- Mitigating Biases in Language Models via Bias Unlearning (2025)
- BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them (2025)
- User-Assistant Bias in LLMs (2025)
- SDGO: Self-Discrimination-Guided Optimization for Consistent Safety in Large Language Models (2025)
- Benchmarking and Improving LLM Robustness for Personalized Generation (2025)