Training Language Models to Generate Quality Code with Program Analysis Feedback
Abstract
A reinforcement learning framework improves code quality in large language models by using automated feedback from program analysis and unit tests.
Code generation with large language models (LLMs), often termed vibe coding, is increasingly adopted in production but fails to ensure code quality, particularly in security (e.g., SQL injection vulnerabilities) and maintainability (e.g., missing type annotations). Existing methods, such as supervised fine-tuning and rule-based post-processing, rely on labor-intensive annotations or brittle heuristics, limiting their scalability and effectiveness. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code using program analysis-guided feedback. Specifically, REAL integrates two automated signals: (1) program analysis detecting security or maintainability defects and (2) unit tests ensuring functional correctness. Unlike prior work, our framework is prompt-agnostic and reference-free, enabling scalable supervision without manual intervention. Experiments across multiple datasets and model scales demonstrate that REAL outperforms state-of-the-art methods in simultaneous assessments of functionality and code quality. Our work bridges the gap between rapid prototyping and production-ready code, enabling LLMs to deliver both speed and quality.
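The abstract does not spell out how the two signals are combined, and no reference code accompanies this page. The sketch below is a minimal, hypothetical illustration of a combined reward: unit-test success gates the reward, and toy stand-ins for a program analyzer (an AST check for SQL-injection-prone `execute` calls and for missing return-type annotations, the two defect classes named above) subtract a per-defect penalty. All names (`reward`, `defect_penalty`) and the gating scheme are assumptions for illustration, not REAL's actual reward design; a production setup would use a real static analyzer and a sandboxed test harness.

```python
import ast

def security_defects(code: str) -> int:
    """Toy stand-in for a security analyzer: flag `execute(...)` calls
    whose query is built via f-string or concatenation, a common
    SQL-injection pattern. A real system would run a proper scanner."""
    tree = ast.parse(code)
    return sum(
        1
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Attribute)
        and node.func.attr == "execute"
        and node.args
        and isinstance(node.args[0], (ast.JoinedStr, ast.BinOp))
    )

def maintainability_defects(code: str) -> int:
    """Toy maintainability check: count function definitions lacking a
    return-type annotation (one defect class the abstract mentions)."""
    tree = ast.parse(code)
    return sum(
        1
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and node.returns is None
    )

def passes_unit_tests(code: str, tests: str) -> bool:
    """Run the candidate and its tests in one namespace; any exception
    or failed assertion counts as a functional failure. A real pipeline
    would sandbox this instead of calling exec() directly."""
    namespace: dict = {}
    try:
        exec(code, namespace)
        exec(tests, namespace)
        return True
    except Exception:
        return False

def reward(code: str, tests: str, defect_penalty: float = 0.25) -> float:
    """Hypothetical combined reward: functional correctness gates the
    reward at zero; each detected quality defect then subtracts a fixed
    penalty. The penalty weight is an arbitrary assumption."""
    if not passes_unit_tests(code, tests):
        return 0.0
    defects = security_defects(code) + maintainability_defects(code)
    return max(0.0, 1.0 - defect_penalty * defects)

candidate = "def add(a: int, b: int):\n    return a + b\n"
tests = "assert add(2, 3) == 5\n"
print(reward(candidate, tests))  # 0.75: tests pass, one missing annotation
```

Gating quality credit behind test success (rather than summing the two signals) is one plausible way to keep the policy from trading correctness for style, but the paper's actual weighting may differ.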
Community
We introduce **REAL**, an RL framework that trains LLMs with automated program analysis feedback, enabling "vibe coding" to be not just fast, but **vulnerability-free & production-ready** 🛡️
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Improving LLM-Generated Code Quality with GRPO (2025)
- VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation (2025)
- VeriReason: Reinforcement Learning with Testbench Feedback for Reasoning-Enhanced Verilog Generation (2025)
- CodeV-R1: Reasoning-Enhanced Verilog Generation (2025)
- Give LLMs a Security Course: Securing Retrieval-Augmented Code Generation via Knowledge Injection (2025)
- VERINA: Benchmarking Verifiable Code Generation (2025)
- OSS-Bench: Benchmark Generator for Coding LLMs (2025)