Usage example for wangrongsheng/DPDG-Qwen2-7B-lora: given a prompt, the model generates a paired "chosen" and "rejected" response, which can be used to build preference data for training reward models.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "wangrongsheng/DPDG-Qwen2-7B-lora"
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
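# Note: the "-lora" suffix suggests this repository may contain LoRA adapter
# weights. If only an adapter is present (an assumption, not confirmed by the
# card), the model can instead be loaded through PEFT, which resolves the base
# model from the adapter config:
#
#   from peft import AutoPeftModelForCausalLM
#   model = AutoPeftModelForCausalLM.from_pretrained(
#       model_name, torch_dtype="auto", device_map="auto"
#   )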

ins = """
Given a prompt, generate two responses of different quality. The first response should be a high-quality, satisfying answer, representing the "chosen" option. The second response should be a low-quality, less desirable answer, representing the "rejected" option.
When generating these two responses, please observe the following:
1. The "chosen" response should have substantive content, fluent wording, and fully answer the question or request in the prompt.
2. The "rejected" response may have some problems, such as incoherent logic, incomplete information, or unclear wording. However, make sure it is still a roughly understandable response rather than something completely irrelevant or meaningless.
3. The two responses should be roughly similar in length, not drastically different.
4. Make sure there is a clear quality difference between the "chosen" response and the "rejected" response, so that the distinction is obvious.
Following these guidelines, generate one "chosen" response and one "rejected" response for the given prompt. This will help train a reward model to distinguish high-quality from low-quality responses.
The prompt is:
"""
prompt = "What is the leaf-spine topology in an ACI fabric?"
messages = [
    {"role": "system", "content": ins},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
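# apply_chat_template renders the system and user messages into Qwen2's chat
# markup; add_generation_prompt=True appends the assistant header so the model
# generates the reply from there.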
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
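
The generated response should contain both a "chosen" and a "rejected" answer. The card does not document the exact output format, so the post-processing below is a minimal sketch that assumes the model labels the two answers with "chosen"/"rejected" markers; adjust the regular expression to whatever format the model actually emits.

import json
import re

def parse_preference_pair(prompt: str, text: str):
    # Hypothetical parser: assumes the two answers are prefixed with
    # "chosen:" and "rejected:" (ASCII or full-width colon); returns None
    # when the expected markers are missing.
    m = re.search(r'chosen[::]\s*(.*?)\s*rejected[::]\s*(.*)', text, re.S | re.I)
    if m is None:
        return None
    return {"prompt": prompt, "chosen": m.group(1).strip(), "rejected": m.group(2).strip()}

pair = parse_preference_pair(prompt, response)
if pair is not None:
    # Append one record in the prompt/chosen/rejected JSONL layout commonly
    # used by preference-tuning toolkits.
    with open("dpo_pairs.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")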
Model size: 7.62B parameters · Tensor type: BF16 · Format: Safetensors