---
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- code
- code-review
- software-engineering
- benchmark
- python
size_categories:
- n<1K
dataset_info:
  features:
  - name: instance_id
  # …
configs:
# …
  - split: test
    path: data/test-*
---

# SWE-CARE: A Comprehensiveness-aware Benchmark for Code Review Evaluation

<p align="center">
  <a href="https://arxiv.org/pdf/2509.14856">
    <img src="https://img.shields.io/badge/Tech%20Report-arXiv-red"></a>
  <a href="https://huggingface.co/datasets/inclusionAI/SWE-CARE">
    <img src="https://img.shields.io/badge/Dataset-HuggingFace-orange"></a>
  <a href="https://github.com/inclusionAI/SWE-CARE">
    <img src="https://img.shields.io/badge/Code-GitHub-blue"></a>
  <a href="https://github.com/inclusionAI/SWE-CARE/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/License-Apache-blue"></a>
</p>

## Dataset Description

SWE-CARE (Software Engineering - Comprehensive Analysis and Review Evaluation) is a comprehensiveness-aware benchmark for evaluating Large Language Models (LLMs) on repository-level code review tasks. The dataset features real-world code review scenarios drawn from popular open-source Python and Java repositories, with rich metadata and reference review comments.

### Dataset Summary

- **Repository**: [inclusionAI/SWE-CARE](https://github.com/inclusionAI/SWE-CARE)
- **Paper**: [CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation](https://arxiv.org/abs/2509.14856)
- **Languages**: Python
- **License**: Apache 2.0
- **Splits**:
  - `test`: 671 instances (primary evaluation set)
  - `dev`: 7,086 instances (development/training set)

## Dataset Structure

### Data Instances

Each instance in the dataset represents a code review task with the following structure:

```json
{
  "instance_id": "voxel51__fiftyone-2353@02e9ba1",
  "repo": "voxel51/fiftyone",
  "language": "Python",
  "pull_number": 2353,
  "title": "Fix issue with dataset loading",
  "body": "This PR fixes...",
  "created_at": "2023-01-15T10:30:00Z",
  "problem_statement": "Issue #2350: Dataset fails to load...",
  "hints_text": "Comments from the issue discussion...",
  "resolved_issues": [
    {
      "number": 2350,
      "title": "Dataset loading error",
      "body": "When loading datasets..."
    }
  ],
  "base_commit": "abc123...",
  "commit_to_review": {
    "head_commit": "def456...",
    "head_commit_message": "Fix dataset loading logic",
    "patch_to_review": "diff --git a/file.py..."
  },
  "reference_review_comments": [
    {
      "text": "Consider adding error handling here",
      "path": "src/dataset.py",
      "diff_hunk": "@@ -10,5 +10,7 @@...",
      "line": 15,
      "start_line": 14,
      "original_line": 15,
      "original_start_line": 14
    }
  ],
  "merged_commit": "ghi789...",
  "merged_patch": "diff --git a/file.py...",
  "metadata": {
    "problem_domain": "Bug Fixes",
    "difficulty": "medium",
    "estimated_review_effort": 3
  }
}
```

### Data Fields

#### Core Fields

- `instance_id` (string): Unique identifier in the format `repo_owner__repo_name-PR_number@commit_sha_short` (see the parsing sketch below)
- `repo` (string): GitHub repository in the format `owner/name`
- `language` (string): Primary programming language (`Python` or `Java`)
- `pull_number` (int): GitHub pull request number
- `title` (string): Pull request title
- `body` (string): Pull request description
- `created_at` (string): ISO 8601 timestamp of PR creation

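Since the identifier packs the repository, PR number, and commit into one string, a small parsing helper can be convenient. Below is a minimal sketch; `parse_instance_id` is a hypothetical name, not part of the SWE-CARE tooling, and it assumes the documented format:

```python
# Hypothetical helper (not part of the SWE-CARE tooling): split an
# instance_id of the form repo_owner__repo_name-PR_number@commit_sha_short.
def parse_instance_id(instance_id: str) -> dict:
    # Split from the right so hyphens in the repository name survive.
    repo_part, commit_sha = instance_id.rsplit("@", 1)
    repo_slug, pull_number = repo_part.rsplit("-", 1)
    owner, name = repo_slug.split("__", 1)
    return {
        "repo": f"{owner}/{name}",
        "pull_number": int(pull_number),
        "commit_sha_short": commit_sha,
    }

print(parse_instance_id("voxel51__fiftyone-2353@02e9ba1"))
# -> {'repo': 'voxel51/fiftyone', 'pull_number': 2353, 'commit_sha_short': '02e9ba1'}
```
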
#### Problem Context

- `problem_statement` (string): Combined title and body of the resolved issue(s)
- `hints_text` (string): Relevant comments from the issues prior to the PR
- `resolved_issues` (list): Array of resolved issues, each with:
  - `number` (int): Issue number
  - `title` (string): Issue title
  - `body` (string): Issue description

#### Code Changes

- `base_commit` (string): Base commit SHA before the changes
- `commit_to_review` (dict): The commit being reviewed:
  - `head_commit` (string): Commit SHA to review
  - `head_commit_message` (string): Commit message
  - `patch_to_review` (string): Git diff of the changes to review (see the diff-stats sketch below)
- `merged_commit` (string): Final merged commit SHA
- `merged_patch` (string): Final merged changes (ground truth)

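To get a feel for the size of a change under review, simple statistics can be computed directly from the `patch_to_review` string. A minimal sketch, assuming standard unified-diff formatting:

```python
# Minimal sketch: rough size statistics for a unified diff string such as
# `patch_to_review` or `merged_patch`.
def diff_stats(patch: str) -> dict:
    files = added = removed = 0
    for line in patch.splitlines():
        if line.startswith("diff --git"):
            files += 1
        elif line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return {"files": files, "added": added, "removed": removed}

# `instance` stands for one dataset record, e.g. dataset[0] (see Usage below).
print(diff_stats(instance["commit_to_review"]["patch_to_review"]))
```
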
#### Reference Reviews

- `reference_review_comments` (list): Human code review comments (grouped by file in the sketch below), each with:
  - `text` (string): Review comment text
  - `path` (string): Path of the file being reviewed
  - `diff_hunk` (string): Relevant code diff context
  - `line` (int): Line number in the new version
  - `start_line` (int): Start line for multi-line comments
  - `original_line` (int): Line number in the original version
  - `original_start_line` (int): Original start line

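Because several reference comments can target the same file, grouping them by `path` is a natural first step when comparing model output against the references. A minimal sketch (again with `instance` standing for one dataset record):

```python
from collections import defaultdict

# Group reference review comments by the file they target.
comments_by_file = defaultdict(list)
for comment in instance["reference_review_comments"]:
    comments_by_file[comment["path"]].append(comment)

for path, comments in comments_by_file.items():
    lines = [c["line"] for c in comments]
    print(f"{path}: {len(comments)} comment(s) at line(s) {lines}")
```
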
#### Metadata

- `metadata` (dict): LLM-classified attributes (a filtering example follows this list):
  - `problem_domain` (string): Category such as "Bug Fix", "Feature", or "Refactoring"
  - `difficulty` (string): "Easy", "Medium", or "Hard"
  - `estimated_review_effort` (int): Review complexity on a 1-5 scale

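These fields make it easy to slice the benchmark, for example to evaluate only hard instances or a single problem domain. A minimal sketch using the standard `datasets` `filter` API; note the `.lower()` normalization, since the example instance above shows `"medium"` while the field docs capitalize the values:

```python
from datasets import load_dataset

dataset = load_dataset("inclusionAI/SWE-CARE", split="test")

# Keep only instances labeled as hard, comparing case-insensitively.
hard = dataset.filter(lambda x: x["metadata"]["difficulty"].lower() == "hard")

# Keep only bug-fix instances (exact domain strings may vary; inspect first).
bug_fixes = dataset.filter(lambda x: "bug" in x["metadata"]["problem_domain"].lower())

print(len(hard), len(bug_fixes))
```
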
### Data Splits

| Split | Instances | Description |
|-------|-----------|-------------|
| test  | 671       | Primary evaluation set for benchmarking |
| dev   | 7,086     | Development set for training/fine-tuning |

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the test split (the default choice for evaluation)
dataset = load_dataset("inclusionAI/SWE-CARE", split="test")

# Load the dev split
dev_dataset = load_dataset("inclusionAI/SWE-CARE", split="dev")

# Load both splits
full_dataset = load_dataset("inclusionAI/SWE-CARE")
```

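Each record comes back as a plain Python dict mirroring the JSON structure shown earlier, so nested fields are reached with ordinary indexing. A short sketch, continuing from the `dataset` loaded above:

```python
# Inspect the first test instance; keys mirror the schema shown earlier.
instance = dataset[0]
print(instance["instance_id"], instance["repo"])
print(instance["commit_to_review"]["head_commit_message"])
print(len(instance["reference_review_comments"]), "reference comments")
```
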
### Using with SWE-CARE Evaluation Framework

```python
from swe_care.utils.load import load_code_review_dataset

# Load from Hugging Face (the default source)
instances = load_code_review_dataset()

# Access instance data
for instance in instances:
    print(f"Instance: {instance.instance_id}")
    print(f"Repository: {instance.repo}")
    print(f"Problem: {instance.problem_statement}")
    print(f"Patch to review: {instance.commit_to_review.patch_to_review}")
    print(f"Reference comments: {len(instance.reference_review_comments)}")
```

### Running Evaluation

See the [GitHub repository](https://github.com/inclusionAI/SWE-CARE) for detailed documentation and examples.

### Evaluation Metrics and Baseline Results

See the [paper](https://arxiv.org/abs/2509.14856) for comprehensive evaluation metrics and baseline results across a range of LLMs.

## Additional Information

### Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{guo2025codefusecrbenchcomprehensivenessawarebenchmarkendtoend,
  title={CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation in Python Projects},
  author={Hanyang Guo and Xunjin Zheng and Zihan Liao and Hang Yu and Peng DI and Ziyin Zhang and Hong-Ning Dai},
  year={2025},
  eprint={2509.14856},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2509.14856},
}
```

### Contributions

We welcome contributions! Please see our [GitHub repository](https://github.com/inclusionAI/SWE-CARE) for:

- Data collection improvements
- New evaluation metrics
- Baseline model results
- Bug reports and feature requests

### License

This dataset is released under the Apache 2.0 License. See [LICENSE](https://github.com/inclusionAI/SWE-CARE/blob/main/LICENSE) for details.

### Changelog

- **v0.2.0** (2025-10): Expanded the dataset to 671 test instances
- **v0.1.0** (2025-09): Initial release with 601 test instances and 7,086 dev instances