FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization

Overview

FrontierCO is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world Combinatorial Optimization (CO) problems. The benchmark spans 8 classical CO problems across 5 application domains, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.

Code for evaluating agents: https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco

Code for running classical solvers, generating training data, and evaluating neural solvers: https://github.com/sunnweiwei/FrontierCO


Dataset Structure

Each subdirectory corresponds to a specific CO task:

FrontierCO/
├── CFLP/
│   ├── easy_test_instances/
│   ├── hard_test_instances/
│   ├── valid_instances/
│   └── config.py
├── CPMP/
├── CVRP/
├── FJSP/
├── MIS/
├── MDS/
├── STP/
├── TSP/
└── ...

Each task folder contains:

  • easy_test_instances/: Benchmark instances that are solvable by SOTA human-designed solvers.
  • hard_test_instances/: Instances that remain computationally intensive or lack known optimal solutions.
  • valid_instances/ (if applicable): Additional instances for validation or development.
  • config.py: Metadata about instance format, solver settings, and reference solutions.
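For a quick sanity check after cloning, a minimal sketch like the one below counts the instance files in each split. It assumes only the layout shown above, with the repository cloned into FrontierCO/:

from pathlib import Path

root = Path("FrontierCO")  # path to your local clone

# Count instance files per task and split, following the layout above.
for task_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    for split in ("easy_test_instances", "hard_test_instances", "valid_instances"):
        split_dir = task_dir / split
        if split_dir.is_dir():
            n_files = sum(1 for f in split_dir.iterdir() if f.is_file())
            print(f"{task_dir.name}/{split}: {n_files} instances")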

Tasks Covered

The benchmark currently includes the following problems:

  • MIS – Maximum Independent Set
  • MDS – Minimum Dominating Set
  • TSP – Traveling Salesman Problem
  • CVRP – Capacitated Vehicle Routing Problem
  • CFLP – Capacitated Facility Location Problem
  • CPMP – Capacitated p-Median Problem
  • FJSP – Flexible Job-shop Scheduling Problem
  • STP – Steiner Tree Problem

Each task includes:

  • Easy and hard test sets with varying difficulty and practical relevance
  • Training and validation instances where applicable, generated using problem-specific generators
  • Reference results for classical and ML-based solvers

Data Sources

Instances are sourced from a mix of:

  • Public repositories (e.g., TSPLib, CVRPLib)
  • DIMACS and PACE Challenges
  • Synthetic instance generators used in prior ML and optimization research
  • Manual curation from recent SOTA solver evaluation benchmarks

For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.


Usage

To use this dataset, clone the repository and select the task of interest. Each task's config.py documents the instance format and provides the functions used below to load and evaluate instances.

git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
cd FrontierCO/CFLP

Load a data instance

from config import load_data
instance = load_data('easy_test_instances/i1000_1.plc')
print(instance)

Generate a solution

# Your solution generation code goes here.
# For example:
solution = my_solver_func(**instance)
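There is no prescribed solver API beyond the call above. As a hedged sketch, a solver is simply a function that accepts the instance fields as keyword arguments and returns a dict whose keys match what eval_func expects; my_solver_func here is a hypothetical placeholder, not part of the dataset:

def my_solver_func(**instance):
    # Hypothetical skeleton: inspect the instance fields documented in the
    # task's config.py, construct a feasible solution, and return it as a
    # dict so it can be unpacked into eval_func via **solution.
    solution = {}
    # ... fill in the solution fields required by this task's eval_func ...
    return solution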

Evaluate a solution

from config import eval_func
score = eval_func(**instance, **solution)
print("Evaluation score:", score)
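To benchmark across a whole split, the same two calls can be looped. This sketch assumes load_data handles every file in the split and that eval_func returns a numeric score:

import glob
from config import load_data, eval_func

# Score a solver over all easy test instances and report the average.
scores = []
for path in sorted(glob.glob('easy_test_instances/*')):
    instance = load_data(path)
    solution = my_solver_func(**instance)
    scores.append(eval_func(**instance, **solution))
print("Average score:", sum(scores) / len(scores))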

Citation

If you use FrontierCO in your research or applications, please cite the following paper:

@misc{feng2025comprehensive,
  title={A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization},
  author={Shengyu Feng and Weiwei Sun and Shanda Li and Ameet Talwalkar and Yiming Yang},
  year={2025},
}

License

This dataset is released under the MIT License. Refer to the LICENSE file for details.

