---
license: apache-2.0
task_categories:
- text-generation
tags:
- code
---
# FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization
## Overview
**FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.
* Agent evaluation code: https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco
* Code for running classical solvers, generating training data, and evaluating neural solvers: https://github.com/sunnweiwei/FrontierCO
---
## Dataset Structure
Each subdirectory corresponds to a specific CO task:
```
FrontierCO/
├── CFLP/
│   ├── easy_test_instances/
│   ├── hard_test_instances/
│   ├── valid_instances/
│   └── config.py
├── CPMP/
├── CVRP/
├── FJSP/
├── MIS/
├── MDS/
├── STP/
├── TSP/
└── ...
```
Each task folder contains:
* `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
* `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
* `valid_instances/` *(if applicable)*: Additional instances for validation or development.
* `config.py`: Metadata about instance format, solver settings, and reference solutions (a minimal interface sketch follows below).
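For orientation, here is a minimal sketch of the interface each `config.py` exposes, inferred from the Usage section below. The function names (`load_data`, `eval_func`) come from this card; the bodies and field names are assumptions, since parsing and scoring are task-specific.

```python
# Hypothetical skeleton of a task's config.py; real tasks implement
# task-specific parsing and scoring (see each task folder).
def load_data(path):
    """Parse one instance file into a dict of named fields."""
    with open(path) as f:
        raw = f.read()
    # Field names and parsing logic vary by task; "data" is a placeholder key.
    return {"data": raw}

def eval_func(**kwargs):
    """Check feasibility and return a numeric score for a solution."""
    # Receives the instance fields plus the solution fields as kwargs.
    return 0.0
```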
---
## Tasks Covered
The benchmark currently includes the following problems:
* **MIS** – Maximum Independent Set
* **MDS** – Minimum Dominating Set
* **TSP** – Traveling Salesman Problem
* **CVRP** – Capacitated Vehicle Routing Problem
* **CFLP** – Capacitated Facility Location Problem
* **CPMP** – Capacitated p-Median Problem
* **FJSP** – Flexible Job-shop Scheduling Problem
* **STP** – Steiner Tree Problem
Each task includes:
* Easy and hard test sets with varying difficulty and practical relevance
* Training and validation instances where applicable, generated using problem-specific generators
* Reference results for classical and ML-based solvers
---
## Data Sources
Instances are sourced from a mix of:
* Public repositories (e.g., [TSPLib](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/), [CVRPLib](http://vrp.galgos.inf.puc-rio.br/))
* DIMACS and PACE Challenges
* Synthetic instance generators used in prior ML and optimization research
* Manual curation from recent SOTA solver evaluation benchmarks
For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
---
## Usage
To use this dataset, clone the repository and select the task of interest. Each `config.py` file documents the format and how to parse or evaluate the instances.
```bash
git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
cd FrontierCO/CFLP
```
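If you prefer not to clone via git, the same files can be fetched with the `huggingface_hub` client; `allow_patterns` limits the download to a single task:

```python
from huggingface_hub import snapshot_download

# Download only the CFLP task from the dataset repo.
local_dir = snapshot_download(
    repo_id="CO-Bench/FrontierCO",
    repo_type="dataset",
    allow_patterns=["CFLP/*"],
)
print(local_dir)
```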
### Load a data instance
```python
from config import load_data
instance = load_data('easy_test_instances/i1000_1.plc')
print(instance)
```
### Generate a solution
```python
# Your solution generation code goes here.
# For example:
solution = my_solver_func(**instance)
```
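As a shape reference only, here is a hypothetical stub for `my_solver_func`: it takes the instance fields as keyword arguments and returns a dict whose keys must match the task's `eval_func` parameters (the key name `solution` is an assumption; check the task's `config.py`):

```python
def my_solver_func(**instance):
    # Placeholder: replace with real solver logic. The returned keys
    # are task-specific; "solution" is an assumed name for illustration.
    return {"solution": []}

solution = my_solver_func(**instance)
```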
### Evaluate a solution
```python
from config import eval_func
score = eval_func(**instance, **solution)
print("Evaluation score:", score)
```
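To score an entire test set, the same pattern extends to a loop over instance files (a sketch; the `.plc` extension follows the CFLP example above and differs per task):

```python
import glob

from config import load_data, eval_func

scores = {}
for path in sorted(glob.glob("easy_test_instances/*.plc")):
    instance = load_data(path)
    solution = my_solver_func(**instance)  # your solver from above
    scores[path] = eval_func(**instance, **solution)

print(scores)
```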
---
## Citation
If you use **FrontierCO** in your research or applications, please cite the following paper:
```bibtex
@misc{feng2025comprehensive,
  title={A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization},
  author={Shengyu Feng and Weiwei Sun and Shanda Li and Ameet Talwalkar and Yiming Yang},
  year={2025},
}
```
---
## License
This dataset is released under the Apache 2.0 License. Refer to the `LICENSE` file for details.
---