G1-7B
Introduction
G1 is a series of large language models for solving graph reasoning tasks, trained on our benchmark Erdős and based on Qwen2.5-Instruct. We apply Group Relative Policy Optimization (GRPO) for reinforcement learning, with supervised finetuning as a preliminary step.
G1 brings the following improvements:
- Significant improvement on graph reasoning: G1 models achieve up to 46% improvement over baselines on Erdős, with the 7B variant matching OpenAI’s o3-mini and the 3B model surpassing Qwen2.5-72B-Instruct by notable margins.
- Strong generalization to unseen graph tasks: G1 exhibits zero-shot generalization on unseen graph tasks, improving performance on other graph reasoning benchmarks (GraphWiz, GraphArena) and real-world graphs (Cora, PubMed).
- No compromise on general reasoning: Crucially, G1 preserves general reasoning ability on GSM8K, MATH, and MMLU-Pro, demonstrating its versatility.
This repo contains the G1-7B model, which has the following features:
- Type: Causal Language Models
- Training Stage: SFT & RL
- Architecture: the same as Qwen2.5-Instruct
- Number of Parameters: 7.62B
- Context Length: 32,768 tokens, with generation up to 8,192 tokens
For more details, please refer to our paper and GitHub.
Requirements
The model is trained based on Qwen/Qwen2.5-7B-Instruct. The code for Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
KeyError: 'qwen2'
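If in doubt, you can check the installed version at runtime. This is a minimal sketch, assuming the packaging utility that ships alongside transformers:
from packaging import version
import transformers
# The Qwen2 architecture used by G1 requires transformers >= 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Please upgrade: pip install -U transformers"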
Quickstart
Here we provide a code snippet with apply_chat_template
to show you how to load the tokenizer and model and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer
INSTRUCTION_TEMPLATE = """
{instruction}
Solve the above problem efficiently and clearly. The last line of your response should be of the following format: 'Therefore, the final answer is: $\\boxed{{ANSWER}}$. I hope it is correct' (without quotes) where ANSWER is just the final number or expression that solves the problem. Think step by step before answering.
""".strip()
model_name = "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtsearch-assistant/guoxiaojun07/models/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "The task is to determine the degree centrality of a node in the graph.\n\n"\
"Degree centrality for a node is the fraction of nodes it is connected to.\n\n"\
"Here is an undirected graph containing nodes from 1 to 15. The edges are: (1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8), (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9), (12, 11), (12, 13).\n\n"\
"Question: What is the degree centrality of node 2 in the graph?\n\n"\
"You need to format your answer as a float number."
messages = [
{"role": "user", "content": INSTRUCTION_TEMPLATE.format(instruction=prompt)}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096,
top_p=0.95,
top_k=30,
temperature=0.6
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
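As an optional sanity check (not part of the snippet above), you can parse the final \boxed{...} answer requested by INSTRUCTION_TEMPLATE out of the response and compare it with the value computed by networkx. The sketch below assumes networkx is installed and that the model followed the requested answer format.
import re
import networkx as nx
# Extract the content of the final \boxed{...} in the response, if present.
match = re.search(r"\\boxed\{([^}]*)\}", response)
predicted = match.group(1) if match else None
# Recompute degree centrality for node 2 on the graph given in the prompt.
edges = [(1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8),
         (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9),
         (12, 11), (12, 13)]
G = nx.Graph()
G.add_nodes_from(range(1, 16))
G.add_edges_from(edges)
expected = nx.degree_centrality(G)[2]  # node 2 has 2 neighbors out of 14 other nodes ≈ 0.1429
print(f"predicted={predicted}, expected={expected:.4f}")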
Evaluation & Performance
Detailed evaluation results are reported in this 📑 paper.
Citation
If you find our work helpful, feel free to cite us.
@article{guo2025g1,
title={G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning},
author={Guo, Xiaojun and Li, Ang and Wang, Yifei and Jegelka, Stefanie and Wang, Yisen},
journal={arXiv preprint arXiv:2505.18499},
year={2025}
}