Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA
Abstract
EverGreenQA, a multilingual QA dataset with evergreen labels, is introduced to benchmark whether LLMs encode question temporality, assessed through both verbalized judgments and uncertainty signals.
Large Language Models (LLMs) often hallucinate in question answering (QA) tasks. A key yet underexplored factor contributing to this is the temporality of questions -- whether they are evergreen (answers remain stable over time) or mutable (answers change). In this work, we introduce EverGreenQA, the first multilingual QA dataset with evergreen labels, supporting both evaluation and training. Using EverGreenQA, we benchmark 12 modern LLMs to assess whether they encode question temporality explicitly (via verbalized judgments) or implicitly (via uncertainty signals). We also train EG-E5, a lightweight multilingual classifier that achieves SoTA performance on this task. Finally, we demonstrate the practical utility of evergreen classification across three applications: improving self-knowledge estimation, filtering QA datasets, and explaining GPT-4o retrieval behavior.
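As a rough illustration of the EG-E5 setup named in the abstract, the sketch below fine-tunes a binary evergreen/mutable classifier on top of a multilingual E5 encoder. The base checkpoint (intfloat/multilingual-e5-base), the "query: " input prefix, the label scheme, and the single training step are illustrative assumptions; the paper's actual EG-E5 recipe is not reproduced here.

```python
# Hypothetical sketch: fine-tuning a multilingual E5 encoder as a binary
# evergreen/mutable question classifier. Base model and label scheme are
# illustrative assumptions, not the paper's exact recipe.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "intfloat/multilingual-e5-base"  # assumed base; the paper's EG-E5 may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# E5 models expect a "query: " prefix on retrieval-style inputs.
questions = [
    "query: What is the boiling point of water at sea level?",  # evergreen
    "query: Who is the current president of France?",           # mutable
]
labels = torch.tensor([0, 1])  # 0 = evergreen, 1 = mutable (assumed label scheme)

batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # one illustrative gradient step
print(outputs.logits.softmax(dim=-1))  # per-question evergreen/mutable probabilities
```

Since E5 is a multilingual embedding model, a single lightweight classifier head of this kind can cover all of the dataset's languages at once, which is consistent with the abstract's description of EG-E5 as a lightweight multilingual classifier.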
Community
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API:
- Faithfulness-Aware Uncertainty Quantification for Fact-Checking the Output of Retrieval Augmented Generation (2025)
- Can LLMs reason over extended multilingual contexts? Towards long-context evaluation beyond retrieval and haystacks (2025)
- Bidirectional LMs are Better Knowledge Memorizers? A Benchmark for Real-world Knowledge Injection (2025)
- ReSCORE: Label-free Iterative Retriever Training for Multi-hop Question Answering with Relevance-Consistency Supervision (2025)
- Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG (2025)
- XRAG: Cross-lingual Retrieval-Augmented Generation (2025)
- IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages (2025)
Models citing this paper: 2
Datasets citing this paper: 1