AXCXEPT/EZO-phi-4-sft7_12000

Tags: Text Generation · Transformers · Safetensors · phi3 · conversational · custom_code · text-generation-inference · Inference Endpoints

Usage

Input Formats

Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:

<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
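
The same prompt string can also be produced by the chat template bundled with the tokenizer instead of being assembled by hand. A minimal sketch, assuming this fine-tune keeps the standard phi-4 chat template:

from transformers import AutoTokenizer

# Assumption: this repo's tokenizer ships the standard phi-4 chat template
# (the repo is tagged custom_code, so allow its custom code if present).
tokenizer = AutoTokenizer.from_pretrained("AXCXEPT/EZO-phi-4-sft7_12000", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
    {"role": "user", "content": "How should I explain the Internet?"},
]

# Render the messages into the <|im_start|> ... <|im_sep|> ... <|im_end|> format shown above,
# ending with an open assistant turn (add_generation_prompt=True).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)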

With transformers

import transformers

# Load this fine-tune; the repo is tagged custom_code, so allow its custom modeling code.
pipeline = transformers.pipeline(
    "text-generation",
    model="AXCXEPT/EZO-phi-4-sft7_12000",
    model_kwargs={"torch_dtype": "auto"},
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    # "You are an excellent AI. Think carefully and answer in polite Japanese."
    {"role": "system", "content": "あなたは優秀なAIです。丁寧な日本語で、よく考えたうえで回答してください。"},
    # "Taro has 5 apples. He also bought 2 boxes of apples, with 3 apples per box. How many apples does Taro have?"
    {"role": "user", "content": "太郎くんはりんごを5つ持っています。彼はさらに2つのりんごの箱を買いました。1つの箱には3つのりんごが入っています。太郎くんは何個のりんごを持っていますか?"},
]

# Generate up to 128 new tokens and print the assistant's reply (the last message in the list).
outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])
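
The pipeline helper can also be bypassed and the model driven directly with AutoModelForCausalLM and AutoTokenizer. A minimal sketch under the same assumptions (this repo's model id and chat template); the messages here are an English rendering of the example above:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/EZO-phi-4-sft7_12000"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are an excellent AI. Think carefully and answer in polite Japanese."},
    {"role": "user", "content": "Taro has 5 apples. He buys 2 more boxes of apples, each containing 3 apples. How many apples does Taro have?"},
]

# Build the chat-formatted input ids and generate up to 128 new tokens.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, i.e. the assistant's answer.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))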

Model size: 14.7B params · Tensor type: BF16 · Format: Safetensors

Base model: microsoft/phi-4 (this model is a fine-tune of it)
