---
language:
  - en
license: cc-by-sa-3.0
size_categories:
  - 1K<n<10K
task_categories:
  - summarization
tags:
  - biomedical
  - health
  - NLP
  - summarization
  - LLM
  - factuality
---

# PlainFact

PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added information) annotations designed for plain language summarization tasks, released along with the PlainQAFact factuality evaluation framework. It is collected from the Cochrane database, sampled from the CELLS dataset (Guo et al., 2024). PlainFact is a sentence-level benchmark that splits each summary into sentences with fine-grained explanation annotations. In total, it contains 200 plain language summary-abstract pairs (2,740 sentences). In addition to the fully factual plain language sentences, we also generate a contrasting non-factual example for each plain language sentence. These contrasting examples are perturbed using GPT-4o, following the perturbation criteria for faithfulness introduced in APPLS (Guo et al., 2024).
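The perturbation step can be reproduced in spirit with any chat-capable LLM. Below is a minimal sketch using the OpenAI Python client, assuming an `OPENAI_API_KEY` in the environment; the prompt wording and the `perturb_sentence` helper are illustrative only, not the exact prompts used to build PlainFact:

```python
# Illustrative sketch only -- the actual PlainFact perturbations follow the
# APPLS (Guo et al., 2024) faithfulness criteria and may differ from this.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def perturb_sentence(sentence: str) -> str:
    """Hypothetical helper: ask GPT-4o to introduce a subtle factual error."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the sentence so it contains a subtle factual "
                    "error (e.g., a changed entity, number, or negation), "
                    "while keeping the style and length similar."
                ),
            },
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content
```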

Currently, we have only released the annotations for explanation sentences. We will release the full version of PlainFact (including Category and Relation information) soon. Stay tuned!

Here are explanations for the column headings (an illustrative record follows the list):

- `Target_Sentence_factual`: The fully factual plain language sentence.
- `Target_Sentence_non_factual`: The perturbed (non-factual) plain language sentence.
- `External`: Whether the sentence includes information that is not explicitly present in the scientific abstract (yes: explanation; no: simplification).
- `Original_Abstract`: The scientific abstract corresponding to each sentence/summary.
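For concreteness, a single record might look roughly like the following; the field values here are invented for illustration and are not taken from the dataset:

```python
# Hypothetical example record -- values are invented for illustration.
example = {
    "Target_Sentence_factual": "The review included 12 randomized trials.",
    "Target_Sentence_non_factual": "The review included 20 randomized trials.",
    "External": "no",  # "yes" marks an explanation, "no" a simplification
    "Original_Abstract": "BACKGROUND: ...",
}
```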

You can load our dataset as follows:

```python
from datasets import load_dataset

plainfact = load_dataset("uzw/PlainFact")
```
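Once loaded, you can inspect examples and, for instance, separate explanation sentences from simplifications. The sketch below assumes a `train` split and that `External` is stored as the string `"yes"`/`"no"` described above; check `plainfact` for the actual split names and value types:

```python
# Assumes a "train" split and the columns described above.
train = plainfact["train"]
print(train[0]["Target_Sentence_factual"])

# Keep only sentences that add information not in the abstract (explanations).
explanations = train.filter(lambda ex: ex["External"] == "yes")
print(f"{len(explanations)} of {len(train)} sentences are explanations")
```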

For detailed information about the dataset or the factuality evaluation framework, please refer to our GitHub repo and the paper at https://huggingface.co/papers/2503.08890.

## Citation

If you use data from PlainFact or PlainFact-summary, please cite using the following BibTeX entry:

```bibtex
@misc{you2025plainqafactautomaticfactualityevaluation,
      title={PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation},
      author={Zhiwen You and Yue Guo},
      year={2025},
      eprint={2503.08890},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.08890},
}
```