Update README.md
**FrontierCO** is a curated benchmark suite for evaluating ML-based solvers on large-scale and real-world **Combinatorial Optimization (CO)** problems. The benchmark spans **8 classical CO problems** across **5 application domains**, providing both training and evaluation instances specifically designed to test the frontier of ML and LLM capabilities in solving NP-hard problems.
Combinatorial optimization plays a fundamental role in discrete mathematics, computer science, and operations research, with applications in routing, scheduling, allocation, and more. As ML-based solvers evolve—ranging from neural networks to symbolic reasoning with large language models—**FrontierCO** offers the first comprehensive dataset suite tailored to test these solvers at realistic scales and difficulties.
Code for evaluating agents: https://github.com/sunnweiwei/CO-Bench?tab=readme-ov-file#evaluation-on-frontierco
Code for running classical solvers, generating training data, and evaluating neural solvers: https://github.com/sunnweiwei/FrontierCO