---
base_model: Qwen/Qwen2.5-0.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: mlc-llm
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlc-ai
- MLC-Weight-Conversion
- mlc-llm
- web-llm
---

# AMKCode/Qwen2.5-0.5B-q4f16_1-MLC

This is the [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) model in MLC format `q4f16_1`.
The conversion was done using the [MLC-Weight-Conversion](https://huggingface.co/spaces/mlc-ai/MLC-Weight-Conversion) space.
The model can be used with the [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm) projects.
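The `q4f16_1` name denotes 4-bit quantized weights with float16 activations and scales. As a rough illustration of the underlying idea (this is *not* MLC's actual packing code; the signed 4-bit range and rounding choices below are assumptions for the sketch), group quantization stores each group of weights as small integers plus one shared scale:

```python
# Illustrative sketch of 4-bit group quantization with a shared scale,
# the general idea behind formats like q4f16_1. NOT MLC's actual
# implementation; range and rounding details are assumptions.

def quantize_group(weights):
    """Map a group of floats to signed 4-bit ints plus one shared scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 7.0  # fit the largest magnitude into [-8, 7]
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    """Recover approximate float weights from the quantized group."""
    return [v * scale for v in q]

group = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_group(group)
restored = dequantize_group(q, scale)
# Each restored weight is within scale/2 of the original.
```

Storing one scale per group (rather than per tensor) keeps the quantization error bounded by each group's own dynamic range, which is why 4-bit formats lose so little quality on small models.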

## Example Usage

Here are some examples of using this model with MLC LLM.
Before running the examples, install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).

### Chat

From the command line, run
```bash
mlc_llm chat HF://AMKCode/Qwen2.5-0.5B-q4f16_1-MLC
```

### REST Server

From the command line, run
```bash
mlc_llm serve HF://AMKCode/Qwen2.5-0.5B-q4f16_1-MLC
```
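
Once running, the server exposes an OpenAI-compatible REST API. A sketch of a chat-completion request with `curl`, assuming the default address of `127.0.0.1:8000` (check the output of `mlc_llm serve` for the actual host and port):

```shell
# Assumes the server started above is listening on 127.0.0.1:8000
# (the assumed default); adjust the address if yours differs.
curl -s http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "HF://AMKCode/Qwen2.5-0.5B-q4f16_1-MLC",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```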

### Python API

```python
from mlc_llm import MLCEngine

# Create the engine
model = "HF://AMKCode/Qwen2.5-0.5B-q4f16_1-MLC"
engine = MLCEngine(model)

# Run a streaming chat completion through the OpenAI-compatible API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
```
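
The loop above prints streamed deltas as they arrive. If you instead want the full reply as one string, accumulate the delta contents. A self-contained sketch using stand-in chunk objects (with a real engine, the chunks come from `engine.chat.completions.create(..., stream=True)`):

```python
from types import SimpleNamespace

def collect_reply(chunks):
    """Join the delta contents of a streamed chat completion into one string."""
    parts = []
    for response in chunks:
        for choice in response.choices:
            if choice.delta.content:
                parts.append(choice.delta.content)
    return "".join(parts)

# Simulated stream shaped like the responses in the loop above.
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hello"))]),
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=", world"))]),
]
print(collect_reply(fake_stream))  # prints "Hello, world"
```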

## Documentation

For more information on the MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](https://github.com/mlc-ai/mlc-llm).