Quick Start

🤗 Hugging Face Transformers

Here is a code snippet showing how to use the chat model with transformers:
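The snippet assumes torch, transformers, and accelerate are installed (transformers needs accelerate for device_map="auto"):

pip install torch transformers accelerate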

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model in bfloat16 and place it automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    "thanghf/demo_math_model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("thanghf/demo_math_model")
model.eval()

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)
prompt = """Gieo hai con súc xắc cân đối và đồng chất. Xác suất để tổng số chấm trên mặt xuất hiện của hai con súc xắc bằng 7 là:"""
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 4096 new tokens, streaming them to stdout as they arrive
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    streamer=streamer,
)
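
Because TextStreamer prints tokens as they are generated, the call above already streams the model's output to stdout. If you also want the completed answer as a string, a minimal follow-up sketch is to strip the prompt tokens from the output and decode the remainder:

# Keep only the newly generated tokens (drop the echoed prompt)
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)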