QuestEval: Summarization Asks for Fact-based Evaluation
Paper: arXiv:2103.12693
This model is a classifier based on T5-small that predicts whether an answer/question pair corresponds to an important fact (i.e., is this answer relevant enough to appear in a plausible summary?). It is a component of the QuestEval metric, but it can also be used on its own.
from transformers import T5Tokenizer, T5ForConditionalGeneration
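# load the tokenizer and the T5-small weighter checkpoint from the Hugging Face Hub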
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
You can try the model using the Inference API; the text input should follow this template (matching the format used at training time):
text_input = "{ANSWER} </s> {QUESTION} </s> {CONTEXT}"
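For example, a filled-in triple can be scored as follows. This is a minimal sketch: the answer, question, and context strings are illustrative, and the decoded output is assumed to be a short text label (QuestEval components of this kind typically generate something like "true"/"false"), so check the generated string against your checkpoint.

# illustrative inputs, not taken from the training data
answer = "Paris"
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."

# build the input following the training template
text_input = f"{answer} </s> {question} </s> {context}"
inputs = tokenizer(text_input, return_tensors="pt")

# generate a short label and decode it
outputs = model.generate(**inputs, max_new_tokens=2)
label = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(label)  # short label indicating whether the QA pair is an important fact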
The model was trained on synthetic data as described in QuestEval: Summarization Asks for Fact-based Evaluation.
@article{scialom2021questeval,
title={QuestEval: Summarization asks for fact-based evaluation},
author={Scialom, Thomas and Dray, Paul-Alexis and Gallinari, Patrick and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo and Wang, Alex},
journal={arXiv preprint arXiv:2103.12693},
year={2021}
}