KIE-HVQA
Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models
Data for the paper Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models
Introduction
Recent advances in multimodal large language models (MLLMs) have significantly improved document understanding by integrating textual and visual information. However, in real-world scenarios, especially under visual degradation, existing models often fall short: they struggle to accurately perceive and handle visual ambiguity, leading to over-reliance on linguistic priors and misaligned visual-textual reasoning. This failure to recognize uncertainty frequently results in hallucinated content, particularly when a precise answer is infeasible.
To study and address this problem, we propose KIE-HVQA, the first dedicated benchmark for evaluating OCR hallucination in degraded document understanding. KIE-HVQA comprises test samples from identity cards and invoices, augmented with simulated real-world degradations that compromise OCR reliability. The benchmark assesses a model's ability to discern which visual information remains reliable under degradation and to respond appropriately, emphasizing the challenge of avoiding hallucination in the face of uncertain data. An illustrative sample layout is sketched below.
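To make the task concrete, the sketch below shows what a sample might look like. The field names and the "unrecognizable" abstention label are illustrative assumptions for exposition, not the released schema.

```python
# Hypothetical KIE-HVQA-style sample (field names and abstention label are
# assumptions, not the released schema). Each item pairs a degraded document
# image with per-field ground truth; fields degraded beyond legibility expect
# an explicit abstention rather than a guess.
sample = {
    "id": "invoice_0001",
    "image": "images/invoice_0001_degraded.png",
    "question": "Extract the invoice number and the total amount.",
    "fields": {
        "invoice_number": "INV-2025-0387",   # still legible in the image
        "total_amount": "unrecognizable",    # degraded; model should abstain
    },
}
```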
Main Results
We evaluated several recent state-of-the-art MLLMs on KIE-HVQA, covering both open-source and proprietary models; full results are reported in the paper (https://arxiv.org/abs/2506.20168).
Usage
Evaluating MLLM results
- Run the following command to evaluate the MLLM results (a hypothetical sketch of the scoring logic follows the command).
cd KIE-HVQA
python3 eval.py
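For orientation, here is a minimal sketch of what field-level scoring could look like. The file names, JSON schema, and abstention handling are assumptions for illustration; eval.py defines the actual metric.

```python
# Hypothetical scoring sketch (file names and JSON schema are assumptions;
# eval.py defines the actual metric). Computes field-level exact-match
# accuracy, where an illegible field counts as correct only if the model
# abstains with the ground-truth abstention label.
import json

def field_accuracy(pred_path: str, gt_path: str) -> float:
    with open(pred_path) as f:
        preds = {item["id"]: item for item in json.load(f)}
    with open(gt_path) as f:
        gts = json.load(f)

    correct, total = 0, 0
    for gt in gts:
        pred_fields = preds.get(gt["id"], {}).get("fields", {})
        for field, answer in gt["fields"].items():
            total += 1
            # Exact match, case-insensitive; for degraded fields the ground
            # truth is the abstention label itself (e.g. "unrecognizable").
            if pred_fields.get(field, "").strip().lower() == answer.lower():
                correct += 1
    return correct / max(total, 1)

print(f"field accuracy: {field_accuracy('pred.json', 'gt.json'):.4f}")
```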
Inference demo
- Run the following command to run inference and produce predictions (a minimal sketch of such a script is shown after the command).
cd KIE-HVQA
python3 infer_qwen.py
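As a rough guide to what infer_qwen.py might do, below is a minimal Qwen2-VL inference sketch following the Hugging Face transformers quickstart API. The model ID, image path, and prompt are assumptions; the actual script may differ.

```python
# Minimal Qwen2-VL inference sketch (model ID, image path, and prompt are
# assumptions; see infer_qwen.py for the actual pipeline).
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "samples/id_card_degraded.png"},  # assumed path
        {"type": "text", "text": "What is the ID number? If the field is "
                                 "illegible, answer 'unrecognizable'."},
    ],
}]

# Build the chat prompt and pack text + image into model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```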
License
The source code is licensed under the Apache License 2.0.
The dataset is licensed under the CC BY 4.0 License.
Acknowledgement
The dataset is built upon OCRBench and WildReceipt (https://arxiv.org/abs/2103.14470).
Citation
If you find this project useful in your research, please cite:
@misc{he2025seeingbelievingmitigatingocr,
      title={Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models},
      author={Zhentao He and Can Zhang and Ziheng Wu and Zhenghao Chen and Yufei Zhan and Yifan Li and Zhao Zhang and Xian Wang and Minghui Qiu},
      year={2025},
      eprint={2506.20168},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.20168},
}