Dataset Card for BIASNEUTRAL
This dataset is a filtered version of BookCorpus containing only bias-neutral text, i.e., entries free of words associated with gender, race, and religion biases.
NOTE: This dataset is derived from BookCorpus, for which we do not have publication rights. Therefore, this repository only provides indices referring to bias-neutral entries within the BookCorpus dataset on Hugging Face. By using `load_dataset('aieng-lab/biasneutral', trust_remote_code=True)`, both the indices and the full BookCorpus dataset are downloaded locally. The indices are then used to construct the BIASNEUTRAL dataset. The initial dataset generation takes a few minutes, but subsequent loads are cached for faster access. This setup only works with `datasets` versions between `3.0.2` and `3.6.0` (successfully tested with Python 3.9-3.11), as later versions deprecated `trust_remote_code`. If your project depends on other `datasets` versions, we recommend generating the dataset once in a separate Python environment, saving the resulting data to disk, and then loading the local copy within your main project.
```python
biasneutral = load_dataset('aieng-lab/biasneutral', trust_remote_code=True, split='train')
```
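If you follow the recommendation above, a minimal sketch of the two-environment workflow could look as follows; the on-disk path and the `text` column access are our assumptions for this example:

```python
# Environment A (datasets >= 3.0.2, <= 3.6.0): generate the dataset once and save it.
from datasets import load_dataset

biasneutral = load_dataset('aieng-lab/biasneutral', trust_remote_code=True, split='train')
biasneutral.save_to_disk('biasneutral-train')  # the path is an arbitrary choice

# Environment B (your main project, any datasets version): load the local copy.
from datasets import load_from_disk

biasneutral = load_from_disk('biasneutral-train')
print(biasneutral[0]['text'])  # assumes the BookCorpus 'text' column is preserved
```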
Examples:
| Index | Text |
|---|---|
| 8498 | no one sitting near me could tell that i was seething with rage . |
| 8500 | by now everyone knew we were an item , the thirty-five year old business mogul , and the twenty -three year old pop sensation . |
| 8501 | we 'd been able to keep our affair hidden for all of two months and that only because of my high security . |
| 8503 | i was n't too worried about it , i just do n't like my personal life splashed across the headlines , but i guess it came with the territory . |
| 8507 | i 'd sat there prepared to be bored out of my mind for the next two hours or so . |
| 8508 | i 've seen and had my fair share of models over the years , and they no longer appealed . |
| 8512 | when i finally looked up at the stage , my breath had got caught in my lungs . |
| 8516 | i pulled my phone and cancelled my dinner date and essentially ended the six-month relationship i 'd been barely having with another woman . |
| 8518 | when i see something that i want , i go after it . |
| 8529 | if i had anything to say about that , it would be a permanent thing , or until i 'd had my fill at least . |
Dataset Details
- Repository: github.com/aieng-lab/gradiend
- Paper: arxiv.org/abs/2502.01406
- Original Data: BookCorpus
Uses
This dataset is suitable for training and evaluating language models. For example, its lack of gender-related words makes it well suited for assessing the language modeling capabilities of both gender-biased and gender-neutral models in masked language modeling (MLM) tasks, allowing an evaluation that is independent of gender bias; the same holds for race and religion bias. A sketch of such an evaluation is shown below.
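As an illustration, the following minimal sketch scores a masked language model on a few BIASNEUTRAL sentences; the choice of `bert-base-uncased`, the 15% masking rate, and the use of `transformers` and `torch` are our assumptions for this example, not part of the original setup:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Two entries from the examples table above; in practice, iterate over the dataset.
texts = [
    "no one sitting near me could tell that i was seething with rage .",
    "when i see something that i want , i go after it .",
]

torch.manual_seed(0)
total_loss, n_scored = 0.0, 0
for text in texts:
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    labels = enc["input_ids"].clone()
    # Mask 15% of the non-special tokens, as in standard MLM evaluation.
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(
            labels[0].tolist(), already_has_special_tokens=True
        ),
        dtype=torch.bool,
    )
    mask = (torch.rand(labels.shape) < 0.15) & ~special
    if not mask.any():
        continue
    enc["input_ids"][mask] = tokenizer.mask_token_id
    labels[~mask] = -100  # loss is computed only on masked positions
    with torch.no_grad():
        total_loss += model(**enc, labels=labels).loss.item()
    n_scored += 1

print(f"mean MLM loss: {total_loss / max(n_scored, 1):.3f}")
```

Because none of the masked tokens can be gender-, race-, or religion-related, differences in this score between a biased and a debiased model reflect general language modeling ability rather than bias.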
Dataset Creation
We generated this dataset by filtering the BookCorpus dataset, keeping only entries that match all of the following criteria (a sketch of such a filter follows the list):
- The entry contains at least 50 characters
- No name from the aieng-lab/namextend dataset is contained
- No gender-specific pronoun (he/she/him/her/his/hers/himself/herself) is contained
- No gender-specific noun is contained, according to the 2421 plural-extended entries of this gendered-word dataset
- No bias attribute word (gender, race, or religion), as defined by Meade et al. (2022), is contained
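A minimal sketch of such a filter is shown below; the small word lists are illustrative placeholders standing in for the full aieng-lab/namextend names, the 2421 gendered nouns, and the Meade et al. (2022) bias attribute words:

```python
import re

# Gender-specific pronouns from the criteria above.
PRONOUNS = {"he", "she", "him", "her", "his", "hers", "himself", "herself"}

# Illustrative placeholders only; the real filter uses the full name list
# (aieng-lab/namextend), the 2421 plural-extended gendered nouns, and the
# Meade et al. (2022) bias attribute words for gender, race, and religion.
NAMES = {"john", "mary"}
GENDERED_NOUNS = {"king", "queen", "actor", "actress"}
BIAS_ATTRIBUTE_WORDS = {"christian", "muslim", "jewish"}

BLOCKED = PRONOUNS | NAMES | GENDERED_NOUNS | BIAS_ATTRIBUTE_WORDS

def is_bias_neutral(text: str) -> bool:
    """Return True if the entry passes all filtering criteria."""
    if len(text) < 50:  # criterion 1: at least 50 characters
        return False
    tokens = re.findall(r"[a-z']+", text.lower())
    return not any(token in BLOCKED for token in tokens)

# The repository then stores only the indices of the BookCorpus entries
# that pass this filter, e.g.:
# indices = [i for i, t in enumerate(bookcorpus["text"]) if is_bias_neutral(t)]
```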
Citation
This dataset was originally introduced as part of GRADIEND. It is not used in the final version of the paper (though it appears in earlier versions, e.g., v2), as the original gender-neutral dataset was generalized to the bias-neutral dataset BIASNEUTRAL.
BibTeX:
```bibtex
@misc{drechsel2025gradiendfeaturelearning,
  title={{GRADIEND}: Feature Learning within Neural Networks Exemplified through Biases},
  author={Jonathan Drechsel and Steffen Herbold},
  year={2025},
  eprint={2502.01406},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.01406},
}
```