Model Card for Qwen2.5-Instruct-7B-COIG-P
This repository contains Qwen2.5-Instruct-7B-COIG-P, a 7B-parameter large language model fine-tuned on the COIG-P Chinese preference dataset to improve alignment with human values, as described in the paper "COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values".
Model Details
- Developed by: [More Information Needed]
- Funded by: [More Information Needed]
- Shared by: [More Information Needed]
- Model type: Large Language Model (LLM)
- Language(s) (NLP): Chinese (zh)
- License: cc-by-nc-4.0
- Finetuned from model: Qwen2.5-7B-Instruct
Model Sources
- Repository: [More Information Needed]
- Paper: COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values
Uses
Direct Use
This model is designed for Chinese text generation. It can be used for open-ended and creative writing, translation, and question answering.
Downstream Use
The model can be fine-tuned for downstream tasks such as chatbots, code generation, summarization, and question answering. The LLaMA-Factory framework can be used for further fine-tuning.
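As one illustration of downstream adaptation (independent of LLaMA-Factory), the sketch below attaches LoRA adapters with the PEFT library. The rank, scaling, and target-module choices are assumptions for illustration, not settings used to produce this checkpoint.

```python
# Hypothetical sketch: attaching LoRA adapters for downstream fine-tuning.
# Rank, alpha, dropout, and target modules are illustrative assumptions,
# not the settings used to train this model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "m-a-p/Qwen2.5-Instruct-7B-COIG-P"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA keeps the base weights frozen and trains small low-rank adapter matrices.
lora_config = LoraConfig(
    r=16,                       # adapter rank (assumption)
    lora_alpha=32,              # scaling factor (assumption)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Qwen2-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights are trainable
# From here, train with any standard causal-LM training loop or trainer.
```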
Out-of-Scope Use
The model's performance may be limited when applied to tasks significantly different from those it was trained on or tasks requiring understanding of languages other than Chinese.
Bias, Risks, and Limitations
The model may exhibit biases present in its training data, particularly reflecting biases inherent in the Chinese language and culture. Users should be aware of potential biases and limitations and use the model responsibly and ethically, avoiding applications that could perpetuate or amplify harmful biases.
How to Get Started with the Model
Use the following code to get started with the Qwen2.5-Instruct-7B-COIG-P model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "m-a-p/Qwen2.5-Instruct-7B-COIG-P"

# Load the model and tokenizer; device_map="auto" places the weights on the
# available GPU(s), falling back to CPU if none is present.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "给我一个大型语言模型的简短介绍。"  # Give me a short introduction to large language models.
messages = [
    {"role": "system", "content": "你是一个乐于助人的助手。"},  # You are a helpful assistant.
    {"role": "user", "content": prompt},
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model continues as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)
# Strip the prompt tokens so only the newly generated response remains.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
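The example above relies on the checkpoint's default generation settings. Decoding can also be controlled explicitly by passing sampling parameters to model.generate; continuing from the example, the values below are illustrative assumptions rather than tuned recommendations for this checkpoint.

```python
# Optional: explicit sampling-based decoding (continues from the example above).
# The specific values are assumptions; tune them for your application.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    repetition_penalty=1.05,
)
```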
Training Details
Training Data
The model was trained on the COIG-P dataset (https://huggingface.co/datasets/m-a-p/COIG-P). This dataset consists of 101k Chinese preference pairs across six domains: Chat, Code, Math, Logic, Novel, and Role.
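To inspect the training data, the dataset can be loaded with the Hugging Face datasets library. This is a hedged sketch: the split and column names are assumptions, so check the dataset card for the actual schema.

```python
# Hypothetical sketch: browsing the COIG-P preference data.
# Split and column names (e.g. "prompt", "chosen", "rejected") are assumptions.
from datasets import load_dataset

coig_p = load_dataset("m-a-p/COIG-P", split="train")
print(coig_p)     # dataset size and column names
print(coig_p[0])  # one preference pair: a prompt with chosen/rejected responses
```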
Training Procedure
The model was trained using the LLaMA-Factory framework.
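The card does not state the training objective, but because COIG-P provides chosen/rejected pairs, a pairwise preference objective in the style of DPO is the natural fit. The snippet below is a minimal, self-contained PyTorch sketch of such a loss, shown for illustration only; it is not the exact LLaMA-Factory recipe used for this checkpoint, and the beta value is an assumption.

```python
# Hypothetical sketch of a DPO-style objective on a batch of preference pairs.
# The log-probability inputs are summed token log-probs of each response under
# the policy being trained and under a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_logp_chosen: torch.Tensor,    # log p_policy(chosen | prompt), shape (batch,)
    policy_logp_rejected: torch.Tensor,  # log p_policy(rejected | prompt)
    ref_logp_chosen: torch.Tensor,       # log p_ref(chosen | prompt)
    ref_logp_rejected: torch.Tensor,     # log p_ref(rejected | prompt)
    beta: float = 0.1,                   # temperature-like hyperparameter (assumption)
) -> torch.Tensor:
    # Log-ratio of policy vs. reference for each response.
    chosen_rewards = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_rewards = beta * (policy_logp_rejected - ref_logp_rejected)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```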
Training Hyperparameters
- Training regime: [More Information Needed]
Speeds, Sizes, Times
- Checkpoint size: [More Information Needed]
- Training time: [More Information Needed]
Evaluation
Testing Data, Factors & Metrics
The model's performance is evaluated using the Chinese Reward Benchmark (CRBench) and AlignBench.
Testing Data
- Chinese Reward Benchmark (CRBench): https://huggingface.co/datasets/m-a-p/COIG-P-CRM (see the loading sketch after this list)
- AlignBench: https://github.com/THUDM/AlignBench
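The CRBench data linked above can be inspected with the datasets library. This is a hedged sketch: the split and column names are assumptions, so consult the dataset card for the actual schema.

```python
# Hypothetical sketch: loading the CRBench evaluation data.
# The split name is an assumption; consult the dataset card for the schema.
from datasets import load_dataset

crbench = load_dataset("m-a-p/COIG-P-CRM", split="train")
print(crbench)     # number of examples and column names
print(crbench[0])  # one evaluation example
```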
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
[More Information Needed]
Citation
BibTeX:
```bibtex
@misc{pteam2025coigphighqualitylargescalechinese,
  title={COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values},
  author={P Team and Siwei Wu and Jincheng Ren and Xinrun Du and Shuyue Guo and Xingwei Qu and Yiming Liang and Jie Liu and Yunwen Li and Tianyu Zheng and Boyu Feng and Huaqing Yuan and Zenith Wang and Jiaheng Liu and Wenhao Huang and Chenglin Cai and Haoran Que and Jian Yang and Yuelin Bai and Zekun Moore Wang and Zhouliang Yu and Qunshu Lin and Ding Pan and Yuchen Jiang and Tiannan Wang and Wangchunshu Zhou and Shenzhi Wang and Xingyuan Bu and Minghao Liu and Guoyin Wang and Ge Zhang and Chenghua Lin},
  year={2025},
  eprint={2504.05535},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.05535},
}
```
APA: P Team, Wu, S., Ren, J., Du, X., Guo, S., Qu, X., Liang, Y., Liu, J., Li, Y., Zheng, T., Feng, B., Yuan, H., Wang, Z., Liu, J., Huang, W., Cai, C., Que, H., Yang, J., Bai, Y., ... Lin, C. (2025). COIG-P: A high-quality and large-scale Chinese preference dataset for alignment with human values. arXiv. https://arxiv.org/abs/2504.05535