Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs
Abstract
The Deliberation over Priors framework enhances the trustworthiness of LLMs by integrating structural and constraint priors from knowledge graphs through knowledge distillation and reasoning introspection.
Knowledge graph-based retrieval-augmented generation seeks to mitigate hallucinations in Large Language Models (LLMs) caused by insufficient or outdated knowledge. However, existing methods often fail to fully exploit the prior knowledge embedded in knowledge graphs (KGs), particularly their structural information and explicit or implicit constraints. The former can enhance the faithfulness of LLMs' reasoning, while the latter can improve the reliability of response generation. Motivated by these observations, we propose a trustworthy reasoning framework, termed Deliberation over Priors (DP), which sufficiently utilizes the priors contained in KGs. Specifically, DP adopts a progressive knowledge distillation strategy that integrates structural priors into LLMs through a combination of supervised fine-tuning and Kahneman-Tversky optimization, thereby improving the faithfulness of relation path generation. Furthermore, our framework employs a reasoning-introspection strategy, which guides LLMs to perform refined reasoning verification based on extracted constraint priors, ensuring the reliability of response generation. Extensive experiments on three benchmark datasets demonstrate that DP achieves new state-of-the-art performance, especially a Hit@1 improvement of 13% on the ComplexWebQuestions dataset, and generates highly trustworthy responses. We also conduct various analyses to verify its flexibility and practicality. The code is available at https://github.com/reml-group/Deliberation-on-Priors.
Community
We propose a trustworthy reasoning framework over KGs named DP (Deliberation on Priors). The framework comprises four key modules: Distillation, Planning, Instantiation, and Introspection. These guide LLMs to generate faithful and reliable responses through a two-stage process: an offline stage and an online stage.
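To make the online stage concrete, here is a minimal, hypothetical sketch of the Planning → Instantiation → Introspection loop. The toy KG, the rule-based planner, and the constraint check are illustrative assumptions standing in for the paper's distilled LLM components; none of this is the authors' implementation.

```python
# Hypothetical toy knowledge graph: (head entity, relation) -> tail entity.
TOY_KG = {
    ("Barack Obama", "spouse"): "Michelle Obama",
    ("Michelle Obama", "birthplace"): "Chicago",
}

def plan(question):
    """Planning: propose a relation path for the question.
    (In DP this is a distilled LLM; here, a fixed rule for illustration.)"""
    if "birthplace" in question and "spouse" in question:
        return ["spouse", "birthplace"]
    return []

def instantiate(entity, path):
    """Instantiation: ground the relation path in the KG, hop by hop."""
    node = entity
    for rel in path:
        node = TOY_KG.get((node, rel))
        if node is None:  # path cannot be grounded
            return None
    return node

def introspect(answer, constraint):
    """Introspection: verify the candidate against a constraint prior
    (here, a simple membership check standing in for LLM verification)."""
    return answer is not None and constraint(answer)

question = "What is the birthplace of the spouse of Barack Obama?"
path = plan(question)
answer = instantiate("Barack Obama", path)
is_city = lambda a: a in {"Chicago", "Honolulu"}  # assumed type constraint
print(answer if introspect(answer, is_city) else "abstain")  # prints "Chicago"
```

The key design point mirrored here is that the answer is only returned after it survives the constraint check; otherwise the system abstains rather than hallucinating, which is the trustworthiness behavior DP targets.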
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Knowledge Graph-extended Retrieval Augmented Generation for Question Answering (2025)
- LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph (2025)
- Enhancing Large Language Models with Reward-guided Tree Search for Knowledge Graph Question and Answering (2025)
- AlignRAG: Leveraging Critique Learning for Evidence-Sensitive Retrieval-Augmented Reasoning (2025)
- ReaRAG: Knowledge-guided Reasoning Enhances Factuality of Large Reasoning Models with Iterative Retrieval Augmented Generation (2025)
- Question-Aware Knowledge Graph Prompting for Enhancing Large Language Models (2025)
- CDF-RAG: Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation (2025)