Beyond Multiple Choice: Verifiable OpenQA for Robust Vision-Language RFT
Abstract
ReVeL, a framework that converts multiple-choice questions to open-form questions, improves data efficiency and robustness in fine-tuning multimodal language models and reveals score inflation in MCQA benchmarks.
Multiple-choice question answering (MCQA) has been a popular format for both evaluation and reinforcement fine-tuning (RFT) of modern multimodal language models. Its constrained output format enables simple, deterministic automatic verification. However, we find that the answer options can leak exploitable signals, which makes accuracy an unreliable indicator of real capability and encourages explicit or implicit answer-guessing behavior during RFT. We propose ReVeL (Rewrite and Verify by LLM), a framework that rewrites multiple-choice questions into open-form questions while keeping answers verifiable whenever possible. The framework categorizes questions by answer type and applies a rewriting and verification scheme tailored to each. For RFT, we convert 20k MCQA examples and use GRPO to fine-tune Qwen2.5-VL models. Models trained on ReVeL-OpenQA match MCQA accuracy on multiple-choice benchmarks and improve OpenQA accuracy by about six percentage points, indicating better data efficiency and more robust reward signals than MCQA-based training. When used for evaluation, ReVeL also reveals up to 20 percentage points of score inflation in MCQA benchmarks (relative to OpenQA), improves judging accuracy, and reduces both cost and latency. We will release code and data publicly.
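To make the described pipeline concrete, here is a minimal sketch of the rewrite-and-verify flow the abstract outlines: classify each gold answer by type, rewrite the MCQ into an open-form question with the options dropped, and score predictions with a type-matched verifier. Since the code and data are not yet released, every name here (`call_llm`, the type taxonomy, the matching rules) is an illustrative assumption, not the authors' API.

```python
# Hypothetical sketch of a ReVeL-style rewrite-and-verify pipeline.
# `call_llm` is a stand-in for any LLM backend (prompt in, string out).
import re
from dataclasses import dataclass
from enum import Enum, auto


class AnswerType(Enum):
    NUMERIC = auto()    # counts, measurements: verified by exact value match
    SHORT = auto()      # short entity answers: verified by normalized match
    FREE_FORM = auto()  # open answers: verified by an LLM judge


@dataclass
class OpenQA:
    question: str
    answer: str
    answer_type: AnswerType


def classify_answer(answer: str) -> AnswerType:
    """Route each gold answer to a verification scheme by its type."""
    if re.fullmatch(r"-?\d+(?:\.\d+)?", answer.strip()):
        return AnswerType.NUMERIC
    if len(answer.split()) <= 3:
        return AnswerType.SHORT
    return AnswerType.FREE_FORM


def normalize(text: str) -> str:
    """Lowercase and strip punctuation for tolerant string matching."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def rewrite_mcq(stem: str, answer: str, call_llm) -> OpenQA:
    """Rewrite an MCQ stem into an open-form question; the options are
    dropped so they can no longer leak exploitable signals."""
    prompt = (
        "Rewrite this multiple-choice question as an open-ended question "
        f"whose correct answer is '{answer}':\n{stem}"
    )
    return OpenQA(call_llm(prompt), answer, classify_answer(answer))


def reward(pred: str, gold: OpenQA, call_llm) -> float:
    """Binary RFT reward from a verification scheme matched to answer type."""
    if gold.answer_type is AnswerType.NUMERIC:
        return float(pred.strip() == gold.answer.strip())
    if gold.answer_type is AnswerType.SHORT:
        return float(normalize(gold.answer) in normalize(pred))
    verdict = call_llm(
        f"Gold answer: {gold.answer}\nModel answer: {pred}\n"
        "Are these equivalent? Reply yes or no."
    )
    return float(verdict.strip().lower().startswith("yes"))
```

Keeping numeric and short answers on deterministic string rules and reserving the LLM judge for free-form answers is one plausible way to realize the cost and latency reductions the abstract reports, since only a fraction of verifications would require a judge call.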
