---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- openvino
- nncf
- 8-bit
base_model_relation: quantized
---

This model is a quantized version of [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
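For reference, a roughly equivalent local export can be sketched with the `optimum-cli` tool that ships with optimum-intel. This is an illustration, not the exact command the space ran; the output directory name is arbitrary, and 8-bit weight-only quantization (`--weight-format int8`) is assumed from the model's tags:

```shell
# Sketch: export the base model to OpenVINO with 8-bit weight quantization.
# Requires: pip install "optimum[openvino]"
optimum-cli export openvino \
  --model Qwen/Qwen2.5-1.5B-Instruct \
  --weight-format int8 \
  qwen2.5-1.5b-instruct-openvino-8bit
```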
First make sure you have `optimum-intel` installed with the OpenVINO extras:

```bash
pip install "optimum[openvino]"
```

You can then load the model as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "AIFunOver/Qwen2.5-1.5B-Instruct-openvino-8bit"
model = OVModelForCausalLM.from_pretrained(model_id)
```