lbourdois committed on
Commit 30a4d57 · verified · 1 Parent(s): f7a476c

Improve language tag
Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1):
  1. README.md +38 -26
README.md CHANGED
@@ -1,26 +1,38 @@
- ---
- base_model: Qwen/Qwen2.5-7B-Instruct
- language:
- - en
- library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
- pipeline_tag: text-generation
- tags:
- - chat
- - openvino
- - nncf
- - fp16
- ---
-
- This model is a quantized version of [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
- First make sure you have `optimum-intel` installed:
- ```bash
- pip install optimum[openvino]
- ```
- To load your model you can do as follows:
- ```python
- from optimum.intel import OVModelForCausalLM
- model_id = "AIFunOver/Qwen2.5-7B-Instruct-openvino-fp16"
- model = OVModelForCausalLM.from_pretrained(model_id)
- ```
+ ---
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ library_name: transformers
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - openvino
+ - nncf
+ - fp16
+ ---
+
+ This model is a quantized version of [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
+ First make sure you have `optimum-intel` installed:
+ ```bash
+ pip install optimum[openvino]
+ ```
+ To load your model you can do as follows:
+ ```python
+ from optimum.intel import OVModelForCausalLM
+ model_id = "AIFunOver/Qwen2.5-7B-Instruct-openvino-fp16"
+ model = OVModelForCausalLM.from_pretrained(model_id)
+ ```