---
language:
- en
license: cc-by-sa-3.0
size_categories:
- 1K<n<10K
task_categories:
- summarization
tags:
- biomedical
- health
- NLP
- summarization
- LLM
- factuality
---
PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added-information) annotations, designed for plain language summarization tasks and released alongside the [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework. It is sampled from the CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)), which was collected from the [Cochrane database](https://www.cochranelibrary.com/).
PlainFact is a sentence-level benchmark: each summary is split into sentences, each carrying fine-grained explanation annotations. In total, it contains 200 plain language summary-abstract pairs (2,740 sentences).
In addition to the factual plain language sentences, we also provide a contrasting non-factual example for each plain language sentence. These contrasting examples are perturbed using GPT-4o, following the faithfulness perturbation criteria introduced in APPLS ([Guo et al., 2024](https://aclanthology.org/2024.emnlp-main.519/)).
> Currently, we have only released the annotations for **Explanation** sentences. We will release the full version of PlainFact (including Category and Relation information) soon. Stay tuned!
Here are explanations of the dataset columns:
- **Target_Sentence_factual**: The original, factual plain language sentence.
- **Target_Sentence_non_factual**: The perturbed (non-factual) plain language sentence.
- **External**: Whether the sentence includes information that is not explicitly present in the scientific abstract (yes: explanation; no: simplification).
- **Original_Abstract**: The scientific abstract corresponding to each sentence/summary.
You can load our dataset as follows:
```python
from datasets import load_dataset
plainfact = load_dataset("uzw/PlainFact")
```
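As a minimal sketch of working with the columns above, continuing from the `plainfact` object just loaded (the split name is an assumption, and we assume `External` is stored as the strings `"yes"`/`"no"` per the column description; inspect `plainfact` for the actual schema):

```python
# Split name is an assumption; check plainfact.keys() for the actual splits.
data = plainfact[list(plainfact.keys())[0]]

# Keep only explanation sentences, i.e., sentences that add information
# beyond the scientific abstract (External == "yes").
explanations = data.filter(lambda ex: ex["External"] == "yes")
print(f"{len(explanations)} explanation sentences out of {len(data)} total")

# Each factual sentence is paired with its GPT-4o-perturbed counterpart.
for ex in explanations.select(range(3)):
    print("factual:    ", ex["Target_Sentence_factual"])
    print("non-factual:", ex["Target_Sentence_non_factual"])
```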
For detailed information about the dataset or the factuality evaluation framework, please refer to our [GitHub repo](https://github.com/zhiwenyou103/PlainQAFact) and paper at https://huggingface.co/papers/2503.08890.
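Because each factual sentence is paired with a perturbed counterpart, PlainFact can also serve as a simple sanity check for any factuality metric: a good metric should score the factual version above the non-factual one given the abstract as source. Below is a minimal sketch, where `factuality_score(sentence, source)` is a hypothetical placeholder for whatever metric you plug in (PlainQAFact's actual interface is documented in the repo above):

```python
def pairwise_accuracy(data, factuality_score):
    """Fraction of pairs where the metric scores the factual sentence
    above its perturbed counterpart, using the abstract as the source.

    `factuality_score(sentence, source)` is a hypothetical placeholder
    for any factuality metric returning a higher-is-better float.
    """
    correct = 0
    for ex in data:
        s_fact = factuality_score(ex["Target_Sentence_factual"],
                                  ex["Original_Abstract"])
        s_pert = factuality_score(ex["Target_Sentence_non_factual"],
                                  ex["Original_Abstract"])
        correct += int(s_fact > s_pert)
    return correct / len(data)
```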
## Citation
If you use data from PlainFact or PlainFact-summary, please cite it using the following BibTeX entry:
```bibtex
@misc{you2025plainqafactautomaticfactualityevaluation,
title={PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation},
author={Zhiwen You and Yue Guo},
year={2025},
eprint={2503.08890},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.08890},
}
```