CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models
The task of automated code refinement aims to automate the developer's side of code review: resolving an actionable code review comment provided by a reviewer. It is a generative task in which the LLM must revise a pre-review code submission, according to the natural language code review comment, to produce the intended post-review code revision. CodeReviewQA further decomposes this generative task into three intermediate reasoning steps, framed as multiple choice question answering (MCQA) problems, to provide early signals for model development.

The benchmark features 900 manually curated, high-quality examples across nine programming languages (100 examples each). Each example represents a real interaction between a human reviewer and a developer in a collaborative code review scenario. Unlike clear, instruction-style prompts, code review comments are often underspecified, ambiguous, and implicit. The benchmark therefore assesses LLMs' proficiency in understanding and following conversational instructions in human-oriented software development. For more details, please see our paper linked below.
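As a minimal loading sketch (assuming the benchmark is hosted on the Hugging Face Hub; the dataset ID and split name below are illustrative, not confirmed):

```python
# Minimal loading sketch. The dataset ID and split name are assumptions
# for illustration; use the identifiers shown on this dataset card's page.
from datasets import load_dataset

dataset = load_dataset("hongyi-tom/CodeReviewQA")  # hypothetical Hub ID

example = dataset["test"][0]  # split name is an assumption
print(example["lang"])    # programming language of the example
print(example["review"])  # the reviewer's natural language comment
print(example["old"])     # pre-review code hunk
```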

Dataset Details
- Paper: https://arxiv.org/abs/2503.16167
- Point of Contact: [email protected]
- Repository: https://github.com/hongyi-tom/CodeReviewQA
(The repository contains the inference scripts used in our experiments.)
Tasks
Original Problem (Text-to-Text Generation)
- Automated Code Refinement (ACR): Given a pre-review code submission and code review comment, generate the post-review code revision that is being requested.
Intermediate Reasoning Steps (Multiple Choice Question Answering)
- Change Type Recognition (CTR): Given a pre-review code submission and code review comment, infer the general code change type that is being requested.
- Change Localisation (CL): Given a pre-review code submission and code review comment, locate the precise lines of code that need to be revised.
- Solution Identification (SI): Given a pre-review code submission and code review comment, identify the exact code revision that is being requested.
(Both Change Localisation and Solution Identification have easy (E) and hard (H) difficulty variants, where the hard variant represents an adversarial setup; a sketch of assembling such a prompt follows this list.)
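The sketch below shows one way to assemble a Solution Identification prompt from an example's fields. It is a hypothetical helper, not the paper's inference script: the field names follow the Data Fields section below, while the prompt wording and option labels are illustrative.

```python
import random

def build_si_prompt(example: dict, hard: bool = False, seed: int = 0) -> tuple[str, str]:
    """Assemble a four-option Solution Identification (SI) prompt.

    Pairs the ground truth revision with the easy or hard distractors,
    shuffles the options, and returns the prompt text together with the
    letter of the correct option.
    """
    key = "solution_wrong_hard" if hard else "solution_wrong_easy"
    options = [example["solution_correct"]] + list(example[key])
    random.Random(seed).shuffle(options)
    correct_letter = "ABCD"[options.index(example["solution_correct"])]

    parts = [
        "Pre-review code submission:",
        example["old"],
        "Code review comment:",
        example["review"],
        "Which revision implements the requested change?",
    ]
    for letter, option in zip("ABCD", options):
        parts.append(f"{letter}.\n{option}")
    return "\n\n".join(parts), correct_letter
```

The same pattern applies to Change Type Recognition and Change Localisation by swapping in the corresponding correct/wrong fields.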
Included Languages
- Natural Language: English
- Programming Languages: C, C++, CSharp, Go, Java, JavaScript, PHP, Python, Ruby
Data Fields
General
- `old` (string): Pre-review code submission (hunk-level granularity)
- `new` (string): Post-review code revision (hunk-level granularity)
- `review` (string): Actionable natural language code review comment
Change Type Recognition
- `type_correct` (string): Ground truth change type
- `type_wrong` (list): Two incorrect change types
Change Localisation
- `loc_correct` (list): Ground truth set of changed lines
- `loc_wrong_easy` (list): Three incorrect sets of changed lines (low Jaccard similarity between answer sets)
- `loc_wrong_hard` (list): Three incorrect sets of changed lines (high Jaccard similarity between answer sets; see the similarity sketch after this section)
Solution Identification
- `solution_correct` (string): Ground truth post-review code revision with line numbers
- `solution_wrong_easy` (list): Three incorrect post-review code revisions with line numbers (low cosine similarity with the ground truth)
- `solution_wrong_hard` (list): Three incorrect post-review code revisions with line numbers (high cosine similarity with the ground truth)
Additional Information
- `lang` (string): Programming language used in the code submission/revision
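As a rough illustration of the easy/hard distinction (a sketch with hypothetical values, not the authors' distractor construction pipeline), Jaccard similarity over line sets separates the two localisation difficulty levels:

```python
def jaccard(a: set[int], b: set[int]) -> float:
    """Jaccard similarity |A & B| / |A | B| between two sets of line numbers."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical values for illustration only; real examples store these
# as lists of changed lines.
loc_correct = {4, 5, 6}
loc_wrong_easy = [{12, 13}, {20}, {1, 2}]     # little overlap with the answer
loc_wrong_hard = [{4, 5}, {5, 6, 7}, {4, 6}]  # heavy overlap with the answer

print([jaccard(loc_correct, w) for w in loc_wrong_easy])  # [0.0, 0.0, 0.0]
print([jaccard(loc_correct, w) for w in loc_wrong_hard])  # approx. [0.67, 0.5, 0.67]
```

Solution Identification distractors are graded analogously, using cosine similarity between candidate revisions and the ground truth instead of Jaccard similarity over line sets.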
Authors
- Hong Yi Lin, The University of Melbourne
- Chunhua Liu, The University of Melbourne
- Haoyu Gao, The University of Melbourne
- Patanamon Thongtanunam, The University of Melbourne
- Christoph Treude, Singapore Management University
Data Source
The code review examples are mined from closed pull requests of open source GitHub projects. These examples were originally provided by the authors of the following paper.
Guo, Q., Cao, J., Xie, X., Liu, S., Li, X., Chen, B. and Peng, X., 2024, February. Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering (pp. 1-13).
Licensing Information
The CodeReviewQA benchmark is licensed under the MIT License.
Citation Information
@article{lin2025codereviewqa,
  title={CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models},
  author={Lin, Hong Yi and Liu, Chunhua and Gao, Haoyu and Thongtanunam, Patanamon and Treude, Christoph},
  journal={arXiv preprint arXiv:2503.16167},
  year={2025}
}