---
license: apache-2.0
tags:
- openvino
- int4
---

This is an INT4-quantized version of the `mistralai/Mistral-7B-Instruct-v0.2` model. It was created with the following Python package versions:
```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```
The quantized model was created with the following command:
```
optimum-cli export openvino --model "mistralai/Mistral-7B-Instruct-v0.2" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Mistral-7B-Instruct-v0.2-ov-int4
```
For more details, run the following command from your Python environment: `optimum-cli export openvino --help`
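Once exported, the model can be loaded for inference. The sketch below uses the `OVModelForCausalLM` class from `optimum-intel` together with the standard `transformers` tokenizer; the local directory path and the prompt are illustrative assumptions, not part of the export output.

```python
# Hedged usage sketch: load the exported INT4 OpenVINO model with optimum-intel.
# Assumes the export command above was run and its output directory
# (./Mistral-7B-Instruct-v0.2-ov-int4) is present locally.
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_dir = "./Mistral-7B-Instruct-v0.2-ov-int4"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = OVModelForCausalLM.from_pretrained(model_dir)

# Mistral-Instruct models expect the [INST] ... [/INST] chat format.
inputs = tokenizer("[INST] What is OpenVINO? [/INST]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```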

During export, NNCF logs the following statistics of the bitwidth distribution:

|   Num bits (N) | % all parameters (layers)   | % ratio-defining parameters (layers)   |
|----------------|-----------------------------|----------------------------------------|
|              4 | 100% (226 / 226)            | 100% (226 / 226)                       |
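To illustrate what the `--weight-format int4 --group-size 128 --sym` settings mean, the sketch below implements a simplified symmetric group-wise INT4 quantizer in plain Python: each group of 128 weights shares one scale, and values are rounded to 4-bit signed integers. This is an illustration only; NNCF's actual scheme may differ in details such as the exact clamping range and scale selection, and the function names here are mine.

```python
# Simplified sketch of symmetric group-wise INT4 weight quantization.
# Illustrative only; NNCF's exact algorithm may differ.

def quantize_int4_sym(weights, group_size=128):
    """Quantize a flat list of floats to INT4, one scale per group."""
    q_groups = []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        max_abs = max(abs(w) for w in group) or 1.0
        scale = max_abs / 7.0  # map [-max_abs, max_abs] onto [-7, 7]
        # Round to the nearest integer and clamp to the signed 4-bit range.
        q = [max(-8, min(7, round(w / scale))) for w in group]
        q_groups.append((scale, q))
    return q_groups

def dequantize(q_groups):
    """Reconstruct approximate float weights from (scale, int4 values) groups."""
    out = []
    for scale, q in q_groups:
        out.extend(v * scale for v in q)
    return out

# Round-trip a toy weight tensor and check the worst-case error,
# which is bounded by half the quantization step (scale / 2).
weights = [0.05 * ((i % 17) - 8) for i in range(256)]
deq = dequantize(quantize_int4_sym(weights))
max_err = max(abs(a - b) for a, b in zip(weights, deq))
print(max_err <= max(abs(w) for w in weights) / 14 + 1e-9)  # prints True
```

Because `--ratio 1 --all-layers` was passed, every layer is quantized this way, which is why the table above reports 100% of parameters at 4 bits.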