---
library_name: transformers
tags:
- torchao
- phi
- phi4
- nlp
- code
- math
- chat
- conversational
license: mit
language:
- multilingual
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
---
[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized by the PyTorch team with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per-row granularity). Use it directly, or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 36% VRAM reduction and a 15-20% speedup on H100, with little to no accuracy impact.
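
For intuition, the sketch below illustrates what per-row float8 (e4m3) weight quantization does: each weight row gets its own scale so that the row fits the float8 range. This is illustrative only; the actual torchao kernels keep the weights in float8 and fuse the scales into the matmul.
```Py
import torch

# Illustrative per-row float8 weight quantization (not the torchao code path).
weight = torch.randn(4, 8, dtype=torch.bfloat16)
f8_max = torch.finfo(torch.float8_e4m3fn).max  # ~448 for e4m3
# One scale per row, chosen so the row's max magnitude maps to the float8 max.
scale = weight.abs().amax(dim=1, keepdim=True).float() / f8_max
weight_f8 = (weight.float() / scale).to(torch.float8_e4m3fn)
# Dequantize to inspect the rounding error introduced by float8.
weight_dq = weight_f8.float() * scale
print((weight.float() - weight_dq).abs().max())
```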
# Inference with vLLM
Install the vLLM nightly build to get some recent changes:
```Shell
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
## Code Example
```Py
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

if __name__ == '__main__':
    # Create an LLM.
    llm = LLM(model="pytorch/Phi-4-mini-instruct-float8dq")
    # Generate texts from the prompts.
    # The output is a list of RequestOutput objects
    # that contain the prompt, generated text, and other information.
    outputs = llm.generate(prompts, sampling_params)
    # Print the outputs.
    print("\nGenerated Outputs:\n" + "-" * 60)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}")
        print(f"Output: {generated_text!r}")
        print("-" * 60)
```
Note: please set `VLLM_DISABLE_COMPILE_CACHE=1` to disable the compile cache when running this code, e.g. `VLLM_DISABLE_COMPILE_CACHE=1 python example.py`, since there are some issues with the composability of compile in vLLM and torchao; this is expected to be resolved in PyTorch 2.8.
## Serving
We can then serve the model with the following command:
```Shell
vllm serve pytorch/Phi-4-mini-instruct-float8dq --tokenizer microsoft/Phi-4-mini-instruct -O3
```
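Once the server is running, you can send requests to vLLM's OpenAI-compatible API. Below is a minimal sketch, assuming the default host and port (`localhost:8000`):
```Py
import requests

# Query the OpenAI-compatible chat completions endpoint exposed by `vllm serve`.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "pytorch/Phi-4-mini-instruct-float8dq",
        "messages": [{"role": "user", "content": "Solve 2x + 3 = 7."}],
        "max_tokens": 128,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```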
# Inference with Transformers
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```
Example:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_path = "pytorch/Phi-4-mini-instruct-float8dq"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
# Quantization Recipe
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```
Use the following code to get the quantized model:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow

model_id = "microsoft/Phi-4-mini-instruct"

quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Push to hub
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-float8dq"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)

# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
```
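If you prefer not to push to the Hub, here is a minimal sketch of saving the quantized model to a local directory instead (the path below is just an example):
```Py
# Alternative to push_to_hub: write the quantized checkpoint to a local directory.
# torchao-quantized weights are currently serialized with safe_serialization=False.
output_dir = "Phi-4-mini-instruct-float8dq"  # example local path
quantized_model.save_pretrained(output_dir, safe_serialization=False)
tokenizer.save_pretrained(output_dir)
```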
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
lm-eval needs to be installed from source:
https://github.com/EleutherAI/lm-evaluation-harness#install
## baseline
```Shell
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8
```
## float8 dynamic activation and float8 weight quantization (float8dq)
```Shell
lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-float8dq --tasks hellaswag --device cuda:0 --batch_size 8
```
| Benchmark                        | Phi-4 mini-Ins | phi4-mini-float8dq  |
|----------------------------------|----------------|---------------------|
| **Popular aggregated benchmark** | | |
| mmlu (0-shot) | 66.73 | Pending |
| mmlu_pro (5-shot) | 46.43 | Pending |
| **Reasoning** | | |
| arc_challenge (0-shot) | 56.91 | 56.66 |
| gpqa_main_zeroshot | 30.13 | 29.46 |
| HellaSwag | 54.57 | 54.55 |
| openbookqa | 33.00 | 33.60 |
| piqa (0-shot) | 77.64 | 77.48 |
| social_iqa | 49.59 | 49.28 |
| truthfulqa_mc2 (0-shot) | 48.39 | 48.09 |
| winogrande (0-shot) | 71.11 | 72.77 |
| **Multilingual** | | |
| mgsm_en_cot_en | 60.8 | 60.0 |
| **Math** | | |
| gsm8k (5-shot) | 81.88 | 80.89 |
| mathqa (0-shot) | 42.31 | 42.51 |
| **Overall** | **55.35** | **Pending** |
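
The same evaluation can also be driven from Python through lm-eval's API. A minimal sketch, assuming a recent lm-eval (>= 0.4) installed from source as above:
```Py
import lm_eval

# Evaluate the quantized checkpoint on hellaswag; swap model_args for the baseline.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=pytorch/Phi-4-mini-instruct-float8dq",
    tasks=["hellaswag"],
    device="cuda:0",
    batch_size=8,
)
print(results["results"]["hellaswag"])
```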
# Peak Memory Usage
## Results
| Benchmark        | Phi-4 mini-Ins | Phi-4-mini-instruct-float8dq   |
|------------------|----------------|--------------------------------|
| Peak Memory (GB) | 8.91 | 5.70 (36% reduction) |
## Benchmark Peak Memory
We can use the following code to get a sense of peak memory usage during inference:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

# use "microsoft/Phi-4-mini-instruct" or "pytorch/Phi-4-mini-instruct-float8dq"
model_id = "microsoft/Phi-4-mini-instruct"
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

torch.cuda.reset_peak_memory_stats()

prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])

mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```
# Model Performance
## Results (H100 machine)
| Benchmark                        | Phi-4 mini-Ins | phi4-mini-float8dq       |
|----------------------------------|----------------|--------------------------|
| latency (batch_size=1) | 1.64s | 1.41s (16% speedup) |
| latency (batch_size=128) | 3.1s | 2.72s (14% speedup) |
| serving (num_prompts=1) | 1.35 req/s | 1.57 req/s (16% speedup) |
| serving (num_prompts=1000) | 66.68 req/s | 80.53 req/s (21% speedup)|
Note: latency results (benchmark_latency) are reported in seconds, and serving results (benchmark_serving) in requests per second.
## Setup
Install the vLLM nightly build to get some recent changes:
```Shell
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Get the vLLM source code (for the benchmark scripts):
```Shell
git clone [email protected]:vllm-project/vllm.git
```
Run the benchmarks from the `vllm` root folder:
## benchmark_latency
### baseline
```Shell
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model microsoft/Phi-4-mini-instruct --batch-size 1
```
### float8dq
```Shell
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model pytorch/Phi-4-mini-instruct-float8dq --batch-size 1
```
## benchmark_serving
We benchmarked the throughput in a serving environment.
Download the ShareGPT dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json`
Other datasets can be found at https://github.com/vllm-project/vllm/tree/main/benchmarks
### baseline
Server:
```Shell
vllm serve microsoft/Phi-4-mini-instruct --tokenizer microsoft/Phi-4-mini-instruct -O3
```
Client:
```Shell
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model microsoft/Phi-4-mini-instruct --num-prompts 1
```
### float8dq
Server:
```Shell
vllm serve pytorch/Phi-4-mini-instruct-float8dq --tokenizer microsoft/Phi-4-mini-instruct -O3
```
Client:
```Shell
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model pytorch/Phi-4-mini-instruct-float8dq --num-prompts 1
```
# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein. |