---
license: mit
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
---

### Model Card: Graph-R1 Series

This model card covers the Graph-R1 series of models, including the final released versions and the variants used in ablation studies. All information is based on the accompanying research paper.

#### **Model Details**

* **Model Developer**: HKUST-DSAIL
* **Model Series**: Graph-R1
* **Model Variants**:
  * **Graph-R1-7B**: Fine-tuned from Qwen2.5-7B-Instruct-1M.
  * **Graph-R1-1.5B**: Fine-tuned from Qwen2.5-1.5B.
  * **Ablation Models**: Variants trained under different configurations (e.g., data volume, training stages, reward functions, curriculum learning strategies).
* **Model Type**: Small reasoning language model, specialized in solving complex NP-level graph-theoretic problems.
* **Architecture**:
  * **Base Model**: Qwen2.5
  * **Training Framework**:
    1. **Cold-start Supervised Fine-Tuning (SFT)**: Fine-tuned on long chain-of-thought (Long-CoT) data distilled from the QwQ-32B model to inject graph-reasoning knowledge.
    2. **Reasoning Optimization via Reinforcement Learning (RL)**: Employs a Group Relative Policy Optimization (GRPO)-based RL framework, combined with a curriculum learning strategy.
* **Model Date**: 2025/04
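The core idea of GRPO is to score each sampled response relative to the other responses in its group, rather than against a learned value baseline. A minimal sketch of that group-relative advantage computation (illustrative only; this is not the training code used for Graph-R1, and the reward values are made up):

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: normalize each sampled response's
    reward by the mean and (population) std. dev. of its group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical rewards for a group of 4 sampled answers to one graph problem:
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
# Correct answers receive positive advantage, incorrect ones negative,
# and the advantages within a group sum to (approximately) zero.
```

Responses that beat their group's average are reinforced; those below it are penalized, so the policy improves without a separate critic model.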

#### **Intended Use**

* **Primary Use Cases**:
  * Solving complex graph-theoretic computational problems at the NP-complete level, such as the Traveling Salesman Problem (TSP), Graph Edit Distance (GED), and the Maximum Clique Problem (MCP).
  * Serving as a compact, resource-efficient reasoning model for academic research and practical applications.
* **Potential Cross-Domain Applications**:
  * The model demonstrates transferability to other complex reasoning tasks, including mathematics, programming, STEM, and logical reasoning.
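To make the target task class concrete, here is a brute-force reference solver for a tiny TSP instance (hypothetical distance matrix, not from the paper's benchmarks). Exhaustive search is only feasible at this scale, which is exactly why reasoning models are evaluated on larger instances of such NP-complete problems:

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """Exact minimum tour cost by enumerating all tours from city 0.
    O(n!) time, so only usable as ground truth for tiny instances."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

# 4-city symmetric instance (made-up distances):
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_bruteforce(dist))  # 80
```

Exact solvers like this one are the standard way to produce ground-truth answers (and hence verifiable rewards) for small graph instances during training and evaluation.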