---
language:
- en
license: cc-by-sa-3.0
size_categories:
- 1K<n<10K
---

Currently, we have only released the annotations for **Explanation** sentences. We will release the full version of PlainFact (including Category and Relation information) soon. Stay tuned!

Here are explanations of the column headings:

- **Target_Sentence_factual**: The fully factual plain language sentence.
- **Target_Sentence_non_factual**: The perturbed (non-factual) plain language sentence.
- **External**: Whether the sentence includes information that is not explicitly present in the scientific abstract (yes: explanation, no: simplification).
- **Original_Abstract**: The scientific abstract corresponding to each sentence/summary.

You can load our dataset as follows (a short sketch for inspecting the columns above appears at the end of this card):

```python
from datasets import load_dataset

plainfact = load_dataset("uzw/PlainFact")
```

For detailed information regarding the dataset or the factuality evaluation framework, please refer to our [GitHub repo](https://github.com/zhiwenyou103/PlainQAFact) and our [paper](https://huggingface.co/papers/2503.08890).

## Citation

If you use data from PlainFact or PlainFact-summary, please cite with the following BibTeX entry:

```
@misc{you2025plainqafactautomaticfactualityevaluation,
      title={PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation},
      author={Zhiwen You and Yue Guo},
      year={2025},
      eprint={2503.08890},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.08890},
}
```
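As a quick sanity check of the columns described above, the minimal sketch below separates explanation sentences from simplification sentences using the **External** column. The split name (`train`) and the exact `External` values (`yes`/`no`) are assumptions inferred from the column descriptions, not confirmed field values; adjust them to match the released files.

```python
from datasets import load_dataset

# Load PlainFact (the "train" split name is an assumption; check the dataset viewer).
plainfact = load_dataset("uzw/PlainFact", split="train")

# Separate explanation sentences (External == "yes") from simplifications (External == "no").
# The "yes"/"no" values are assumed from the column description above.
explanations = plainfact.filter(lambda row: str(row["External"]).lower() == "yes")
simplifications = plainfact.filter(lambda row: str(row["External"]).lower() == "no")

print(f"Explanation sentences: {len(explanations)}")
print(f"Simplification sentences: {len(simplifications)}")

# Inspect one factual/non-factual sentence pair together with its source abstract.
example = explanations[0]
print(example["Target_Sentence_factual"])
print(example["Target_Sentence_non_factual"])
print(example["Original_Abstract"][:300])
```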