Update README.md
---

# Dataset Summary

SWE-bench Extra is a dataset that can be used to train or evaluate agentic systems specializing in resolving GitHub issues. It is based on the methodology used to build the SWE-bench benchmark and includes 6,415 Issue-Pull Request pairs sourced from 1,988 Python repositories.
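Each instance pairs a GitHub issue with the pull request that resolved it. A minimal sketch of what a record might look like (the field names below follow the common SWE-bench schema and are assumptions for illustration, not verified against this dataset):

```python
# Illustrative record in the SWE-bench style; field names and values
# are assumptions for this sketch, not taken from the actual dataset.
sample_instance = {
    "instance_id": "example-org__example-repo-123",  # hypothetical id
    "repo": "example-org/example-repo",              # hypothetical repo
    "problem_statement": "Calling foo() raises TypeError when ...",
    "patch": "diff --git a/foo.py b/foo.py\n...",    # the PR's gold fix
    "test_patch": "diff --git a/tests/test_foo.py b/tests/test_foo.py\n...",
}

# An agent is given the issue text and repository, and is scored on
# whether its generated patch makes the PR's tests pass.
print(sample_instance["instance_id"])
```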

# Dataset Description

The SWE-bench Extra dataset supports the development of software engineering agents capable of autonomously solving GitHub issues. The data collection process, based on the SWE-bench methodology, involves the following steps:

2. **Filtering**: Instances are filtered based on attributes such as issue descriptions, relevant code paths, and test patches.
3. **Execution-based Validation**: The project environments are set up and tests are run to verify that they execute correctly.
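The filtering step can be pictured as a simple predicate over candidate instances. A hedged sketch, assuming instances are dicts with `problem_statement` and `test_patch` fields (names borrowed from the SWE-bench schema; the thresholds are arbitrary and this is not the authors' actual filter):

```python
def keep_instance(inst: dict) -> bool:
    """Toy filter: keep instances that have a usable issue description
    and a test patch that actually touches test files.
    Illustrative only, not the real SWE-bench Extra filter."""
    issue = inst.get("problem_statement", "").strip()
    test_patch = inst.get("test_patch", "")
    has_description = len(issue) >= 40   # arbitrary length threshold
    touches_tests = "test" in test_patch  # crude path check
    return has_description and touches_tests

candidates = [
    {"problem_statement": "Calling foo() raises TypeError because the "
                          "default argument is mutated between calls.",
     "test_patch": "diff --git a/tests/test_foo.py b/tests/test_foo.py"},
    {"problem_statement": "fix", "test_patch": ""},  # too vague, no tests
]
kept = [c for c in candidates if keep_instance(c)]
print(len(kept))  # → 1
```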

For a more detailed description of the data collection process, please refer to our blog post [Scaling data collection for training software engineering agents](https://nebius.com/blog/posts/scaling-data-collection-for-training-swe-agents).

As an example use case of this dataset, we’ve used SWE-bench Extra instances to generate a dataset of 80,036 trajectories, [`nebius/swe-agent-trajectories`](https://huggingface.co/datasets/nebius/swe-agent-trajectories). We’ve then trained an action generator model that achieves a score of 19.2% on a subset of 50 random instances from the SWE-bench Verified benchmark, a 30% relative improvement over its parent model, Qwen2.5-72B-Instruct, which scored 14.8%. Further augmenting the action generator with a guided search based on a critic model, also trained on this data, achieves 40.6% on the full SWE-bench Verified benchmark, which is state-of-the-art among agents using solely open-weight models. You can read more about this agent in our blog post, [“Leveraging Training and Search for Better Software Engineering Agents”](https://nebius.com/blog/posts/training-and-search-for-software-engineering-agents).
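The relative-improvement figure quoted above can be checked directly from the two scores; a quick sketch of the arithmetic:

```python
parent_score = 14.8   # Qwen2.5-72B-Instruct on the 50-instance subset
trained_score = 19.2  # action generator trained on the trajectories
relative_improvement = (trained_score - parent_score) / parent_score
print(f"{relative_improvement:.0%}")  # → 30%
```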