Update README.md
README.md (CHANGED)

@@ -17,12 +17,12 @@ base_model:
 pipeline_tag: text-generation
 ---
 
-[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per row granularity), by PyTorch team.
+[Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per-row granularity), by the PyTorch team. Use it directly, or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 36% VRAM reduction, a 15-20% speedup, and little to no accuracy impact on H100.
 
 
 # Quantization Recipe
 
-
+Install the required packages:
 
 ```
 pip install git+https://github.com/huggingface/transformers@main
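
The new description points at vLLM serving, but the serving snippet itself is not part of this diff. Here is a minimal offline-inference sketch, assuming a recent vLLM; the checkpoint name is the one used in the lm_eval command further down, and the prompt and sampling settings are only illustrative:

```
from vllm import LLM, SamplingParams

# Quantized checkpoint referenced elsewhere in this README
# (illustrative usage, not the README's own snippet).
llm = LLM(model="pytorch/Phi-4-mini-instruct-float8dq")
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

outputs = llm.generate(["What are the benefits of float8 quantization?"], sampling_params)
print(outputs[0].outputs[0].text)
```
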
@@ -31,7 +31,7 @@ pip install torch
 pip install accelerate
 ```
 
-
+Use the following code to get the quantized model:
 
 ```
 import torch
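
The quantization snippet is truncated at `import torch` in this hunk. A minimal sketch of the recipe the README describes (float8 dynamic activation, float8 per-row weights) might look like the following, assuming a recent torchao and a transformers build whose `TorchAoConfig` accepts a torchao config object; the dtype and device settings are placeholders:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow

model_id = "microsoft/Phi-4-mini-instruct"

# Wrap the torchao recipe in transformers' TorchAoConfig so the model is
# quantized while it is being loaded (sketch; details assumed, not from the diff).
quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
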
@@ -131,7 +131,6 @@ lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-float8dq
 
 # Peak Memory Usage
 
-We can use the following code to get a sense of peak memory usage during inference:
 
 ## Results
 
@@ -143,6 +142,9 @@ We can use the following code to get a sense of peak memory usage during inferen
 
 ## Benchmark Peak Memory
 
+We can use the following code to get a sense of peak memory usage during inference:
+
+
 ```
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
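
The benchmark block is likewise truncated in this hunk, so the following is only a rough sketch of measuring peak VRAM during inference with `torch.cuda.max_memory_allocated`; the prompt and generation settings are placeholders:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Quantized checkpoint from the diff; loading it requires torchao to be installed.
model_id = "pytorch/Phi-4-mini-instruct-float8dq"

torch.cuda.reset_peak_memory_stats()

model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is float8 quantization?", return_tensors="pt").to(model.device)
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=64)

# Peak memory allocated by tensors during load + generation, in GB.
print(f"Peak memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```
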