---
license: mit
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
library_name: transformers
---
- SWE-Dev is an open-source agent for software engineering tasks.
- We develop a comprehensive pipeline for creating developer-oriented datasets from GitHub repositories, including issue tracking, code localization, test case generation, and evaluation.
- Built on open-source frameworks (OpenHands) and models, SWE-Dev-7B and SWE-Dev-32B achieve solve rates of 23.4% and 36.6% on SWE-bench Verified, respectively, approaching the performance of GPT-4o.
- We find that scaling both training data and inference effectively boosts performance on SWE-bench, and that higher data quality further improves this trend when combined with reinforcement fine-tuning (RFT). For inference scaling specifically, the solve rate of SWE-Dev increased from 34.0% at 30 rounds to 36.6% at 75 rounds.
SWE-Dev-32B is trained from [Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct).
Notion Link: https://ubecwang.notion.site/1bc32cf963e080b2a01df2895f66021f?v=1bc32cf963e0810ca07e000c86c4c1e1
GitHub Link: https://github.com/THUDM/SWE-Dev
Hugging Face Links:
- SWE-Dev-7B (Qwen-2.5-7B-Instruct): https://huggingface.co/THUDM/SWE-Dev-7B/
- SWE-Dev-9B (GLM-4-9B-Chat): https://huggingface.co/THUDM/SWE-Dev-9B/
- SWE-Dev-32B (Qwen-2.5-32B-Instruct): https://huggingface.co/THUDM/SWE-Dev-32B/
- SWE-Dev-train: https://huggingface.co/datasets/THUDM/SWE-Dev-train/
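
As a quick-start reference, here is a minimal inference sketch using the Hugging Face `transformers` library. This is an illustrative example rather than an official recipe from the SWE-Dev pipeline; it assumes standard Qwen2.5-style chat-template usage, and the 32B model will need multi-GPU or quantized loading in practice.

```python
# Minimal sketch: load SWE-Dev-32B and ask it to patch a small bug.
# Assumes `transformers` and `accelerate` are installed and sufficient GPU memory is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/SWE-Dev-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a software engineering assistant."},
    {"role": "user", "content": "Fix the off-by-one bug:\n\ndef last(xs):\n    return xs[len(xs)]"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For full agentic evaluation on SWE-bench, the model is intended to be driven through the OpenHands scaffold described in the GitHub repository rather than through raw single-turn generation as above.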