---
license: apache-2.0
task_categories:
- text-generation
tags:
- code
---
# FrontierCO: Benchmark Dataset for Frontier Combinatorial Optimization

## Overview
FrontierCO is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world Combinatorial Optimization (CO) problems. The benchmark spans 8 classical CO problems across 5 application domains, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.
- Code for evaluating agents on FrontierCO: https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco
- Evaluation code for running classical solvers, generating training data, and evaluating neural solvers: https://github.com/sunnweiwei/FrontierCO
## Dataset Structure
Each subdirectory corresponds to a specific CO task:
```
FrontierCO/
├── CFLP/
│   ├── easy_test_instances/
│   ├── hard_test_instances/
│   ├── valid_instances/
│   └── config.py
├── CPMP/
├── CVRP/
├── FJSP/
├── MIS/
├── MDS/
├── STP/
├── TSP/
└── ...
```
Each task folder contains:
- `easy_test_instances/`: Benchmark instances that are solvable by SOTA human-designed solvers.
- `hard_test_instances/`: Instances that remain computationally intensive or lack known optimal solutions.
- `valid_instances/` (if applicable): Additional instances for validation or development.
- `config.py`: Metadata about the instance format, solver settings, and reference solutions.
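For orientation, here is a minimal sketch (assuming the repository has been cloned locally, as described under Usage below) that walks this layout and counts the instance files in each split; the directory names are exactly the ones shown in the tree above:

```python
# Minimal sketch: count instance files per split for one task folder.
# Uses only the directory layout shown above; nothing here depends on the
# instance file format.
from pathlib import Path

task_dir = Path("FrontierCO/CFLP")
for split in ("easy_test_instances", "hard_test_instances", "valid_instances"):
    split_dir = task_dir / split
    if split_dir.is_dir():
        files = sorted(split_dir.iterdir())
        print(f"{split}: {len(files)} instance files")
```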
## Tasks Covered
The benchmark currently includes the following problems:
- MIS – Maximum Independent Set
- MDS – Minimum Dominating Set
- TSP – Traveling Salesman Problem
- CVRP – Capacitated Vehicle Routing Problem
- CFLP – Capacitated Facility Location Problem
- CPMP – Capacitated p-Median Problem
- FJSP – Flexible Job-shop Scheduling Problem
- STP – Steiner Tree Problem
Each task includes:
- Easy and hard test sets with varying difficulty and practical relevance
- Training and validation instances where applicable, generated using problem-specific generators
- Reference results for classical and ML-based solvers
## Data Sources
Instances are sourced from a mix of:
- Public repositories (e.g., TSPLIB, CVRPLIB)
- DIMACS and PACE Challenges
- Synthetic instance generators used in prior ML and optimization research
- Manual curation from recent SOTA solver evaluation benchmarks
For tasks lacking open benchmarks, we include high-quality synthetic instances aligned with real-world difficulty distributions.
## Usage
To use this dataset, clone the repository and select the task of interest. Each `config.py` file documents the instance format and how to parse and evaluate the instances.

```bash
git clone https://huggingface.co/datasets/CO-Bench/FrontierCO
cd FrontierCO/CFLP
```
### Load a data instance

```python
from config import load_data

instance = load_data('easy_test_instances/i1000_1.plc')
print(instance)
```
### Generate a solution

```python
# Your solution generation code goes here.
# For example:
solution = my_solver_func(**instance)
```
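Here, `my_solver_func` stands in for your own solver. Purely to illustrate the calling convention, a hypothetical stub might look like the following; the actual fields of `instance` and the expected structure of `solution` are task-specific and documented in each task's `config.py`:

```python
# Hypothetical stub; not part of the dataset. The real instance fields and the
# expected solution format are documented in each task's config.py.
def my_solver_func(**instance):
    # ... compute a feasible solution for the given instance ...
    solution = {}  # filled with whatever eval_func expects for this task
    return solution
```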
### Evaluate a solution

```python
from config import eval_func

score = eval_func(**instance, **solution)
print("Evaluation score:", score)
```
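Putting the three steps together, a minimal end-to-end sketch (assuming `load_data` and `eval_func` behave as shown above, and that `my_solver_func` is your own solver from the previous step) that scores a solver on every easy test instance of a task:

```python
from pathlib import Path
from config import load_data, eval_func

scores = []
for path in sorted(Path('easy_test_instances').iterdir()):
    instance = load_data(str(path))        # parse one instance file
    solution = my_solver_func(**instance)  # your solver from the previous step
    scores.append(eval_func(**instance, **solution))

print('Average score:', sum(scores) / len(scores))
```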
## Citation
If you use FrontierCO in your research or applications, please cite the following paper:
```bibtex
@misc{feng2025comprehensive,
  title={A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization},
  author={Shengyu Feng and Weiwei Sun and Shanda Li and Ameet Talwalkar and Yiming Yang},
  year={2025},
}
```
## License
This dataset is released under the Apache 2.0 License (see the license metadata above). Refer to the LICENSE file for details.