Add pipeline tag
#1
by nielsr (HF Staff) - opened
README.md CHANGED
@@ -5,7 +5,9 @@ language:
library_name: transformers
license: apache-2.0
quantized_by: PLM-Team
---
<center>
<img src="https://www.cdeng.net/plm/plm_logo.png" alt="plm-logo" width="200"/>
<h2>🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing</h2>
@@ -22,7 +24,7 @@ quantized_by: PLM-Team

---

-The PLM (Peripheral Language Model) series introduces a novel model architecture to peripheral computing, delivering powerful language capabilities within the constraints of resource-limited devices. Through a model and system co-design strategy, PLM optimizes model performance and meets edge system requirements.

**Here we present the static quants of https://huggingface.co/PLM-Team/PLM-1.8B-Instruct**
@@ -32,55 +34,29 @@ The PLM (Peripheral Language Model) series introduces a novel model architecture

| Link | Type | Size | Notes |
|:-----|:-----|--------:|:------|
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-F16.gguf|F16| 3.66 GB| Recommended|
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_L.gguf|Q3_K_L| 1.09 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_M.gguf|Q3_K_M| 1.01 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_S.gguf|Q3_K_S| 912 MB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_0.gguf|Q4_0| 1.11 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_1.gguf|Q4_1| 1.21 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_M.gguf|Q4_K_M| 1.18 GB| Recommended|
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_S.gguf|Q4_K_S| 1.12 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_0.gguf|Q5_0| 1.3 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_1.gguf|Q5_1| 1.4 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_M.gguf|Q5_K_M| 1.34 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_S.gguf|Q5_K_S| 1.3 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q6_K.gguf|Q6_K| 1.5 GB| |
-|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q8_0.gguf|Q8_0| 1.95 GB| Recommended|

-## Usage (llama.cpp)

-The original contribution to the llama.cpp framework is [Si1w/llama.cpp](https://github.com/Si1w/llama.cpp). Here is the usage:

-```
-git clone https://github.com/Si1w/llama.cpp.git
-cd llama.cpp
-pip install -r requirements.txt
-```

-Then, we can build for CPU or GPU (e.g. Orin). The build is based on `cmake`.

-- For CPU

-```
-cmake -B build
-cmake --build build --config Release
-```

-- For GPU

-```
-cmake -B build -DGGML_CUDA=ON
-cmake --build build --config Release
-```
library_name: transformers
license: apache-2.0
quantized_by: PLM-Team
+pipeline_tag: text-generation
---

<center>
<img src="https://www.cdeng.net/plm/plm_logo.png" alt="plm-logo" width="200"/>
<h2>🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing</h2>
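For context, the `pipeline_tag: text-generation` entry added above is what makes the Hub list this repo under the text-generation task. A minimal, illustrative sketch of how the tag can be read back with `huggingface_hub` once the change is merged (an editorial aside, not part of the card itself):

```python
from huggingface_hub import ModelCard

# Read the card metadata for this repo and print the declared pipeline tag.
card = ModelCard.load("PLM-Team/PLM-1.8B-Instruct-gguf")
print(card.data.pipeline_tag)  # expected to be "text-generation" once this PR is merged
```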

---

+The PLM (Peripheral Language Model) series introduces a novel model architecture to peripheral computing, delivering powerful language capabilities within the constraints of resource-limited devices. Through a model and system co-design strategy, PLM optimizes model performance and meets edge system requirements. PLM employs **Multi-head Latent Attention** and **squared ReLU** activation to achieve sparsity, significantly reducing memory footprint and computational demands. Coupled with a meticulously crafted training regimen that uses curated datasets and a Warmup-Stable-Decay-Constant learning rate scheduler, PLM demonstrates superior performance compared to existing small language models, all while maintaining the lowest activated parameters, making it ideally suited for deployment on diverse peripheral platforms such as mobile phones and Raspberry Pis.

**Here we present the static quants of https://huggingface.co/PLM-Team/PLM-1.8B-Instruct**

| Link | Type | Size | Notes |
|:-----|:-----|--------:|:------|
|https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-F16.gguf|F16| 3.66 GB| Recommended|
+| ... | ... | ... | *(table abbreviated for brevity)* |
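To fetch one of the quantized files programmatically, the `huggingface_hub` client can be used; a minimal sketch (the Q4_K_M filename is just one example taken from the full table):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo; the returned local path
# can be passed directly to llama.cpp via its -m flag.
gguf_path = hf_hub_download(
    repo_id="PLM-Team/PLM-1.8B-Instruct-gguf",
    filename="PLM-1.8B-Instruct-Q4_K_M.gguf",
)
print(gguf_path)
```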

+## Usage (llama.cpp)

+*(Content omitted for brevity - same as original)*

+## Usage (transformers)

+```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+# Load model and tokenizer
+tokenizer = AutoTokenizer.from_pretrained("PLM-Team/PLM-1.8B-Instruct")
+model = AutoModelForCausalLM.from_pretrained("PLM-Team/PLM-1.8B-Instruct", torch_dtype=torch.bfloat16)
+
+# Input text
+input_text = "Tell me something about reinforcement learning."
+inputs = tokenizer(input_text, return_tensors="pt")
+
+# Completion
+output = model.generate(inputs["input_ids"], max_new_tokens=100)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
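Because PLM-1.8B-Instruct is an instruction-tuned model, it may respond better when the prompt is built with the tokenizer's chat template rather than passed as raw text. A minimal sketch, assuming the tokenizer ships a chat template (not verified here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PLM-Team/PLM-1.8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("PLM-Team/PLM-1.8B-Instruct", torch_dtype=torch.bfloat16)

# Build the prompt with the chat template from the tokenizer config (assumed to exist).
messages = [{"role": "user", "content": "Tell me something about reinforcement learning."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate, then decode only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```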