Enhance model card: Add abstract and project page
This PR enhances the model card by adding the paper's abstract for better context and linking the official project homepage (`https://powerinfer.ai/`), giving readers a more complete overview of the SmallThinker model and pointers to additional resources.
README.md (CHANGED)
```diff
@@ -1,15 +1,20 @@
 ---
 language:
 - en
+library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
 ---
 
 # SmallThinker: A Family of Efficient Large Language Models Natively Trained for Local Deployment
 
 **Paper**: [SmallThinker: A Family of Efficient Large Language Models Natively Trained for Local Deployment](https://huggingface.co/papers/2507.20984)
 **Code**: [https://github.com/SJTU-IPADS/SmallThinker](https://github.com/SJTU-IPADS/SmallThinker)
+**Project Page**: [https://powerinfer.ai/](https://powerinfer.ai/)
+
+## Abstract
+
+While frontier large language models (LLMs) continue to push capability boundaries, their deployment remains confined to GPU-powered cloud infrastructure. We challenge this paradigm with SmallThinker, a family of LLMs natively designed - not adapted - for the unique constraints of local devices: weak computational power, limited memory, and slow storage. Unlike traditional approaches that mainly compress existing models built for clouds, we architect SmallThinker from the ground up to thrive within these limitations. Our innovation lies in a deployment-aware architecture that transforms constraints into design principles. First, we introduce a two-level sparse structure combining fine-grained Mixture-of-Experts (MoE) with sparse feed-forward networks, drastically reducing computational demands without sacrificing model capacity. Second, to conquer the I/O bottleneck of slow storage, we design a pre-attention router that enables our co-designed inference engine to prefetch expert parameters from storage while computing attention, effectively hiding the storage latency that would otherwise cripple on-device inference. Third, for memory efficiency, we utilize a NoPE-RoPE hybrid sparse attention mechanism to slash KV cache requirements. We release SmallThinker-4B-A0.6B and SmallThinker-21B-A3B, which achieve state-of-the-art performance scores and even outperform larger LLMs. Remarkably, our co-designed system mostly eliminates the need for expensive GPU hardware: with Q4_0 quantization, both models exceed 20 tokens/s on ordinary consumer CPUs, while consuming only 1GB and 8GB of memory respectively. SmallThinker is publicly available at this http URL and this http URL.
 
 ## Introduction
```
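The pre-attention router described in the new abstract is the key systems idea: because expert selection happens *before* attention, the engine can start reading expert weights from slow storage and hide that I/O behind the attention computation. Below is a minimal Python sketch of that overlap, purely illustrative and not the SmallThinker engine's actual API; the `layer` object and its methods are hypothetical names.

```python
# Illustrative sketch of prefetch-during-attention -- NOT the real engine.
# Assumes a hypothetical `layer` object exposing router(), attention(),
# load_expert_from_disk(), and moe_ffn().
from concurrent.futures import ThreadPoolExecutor

io_pool = ThreadPoolExecutor(max_workers=1)  # background storage reads

def decode_layer(layer, hidden_state, expert_cache):
    # 1. Pre-attention router: expert choices are known up front.
    expert_ids = layer.router(hidden_state)

    # 2. Start storage reads for non-resident experts; they run
    #    concurrently with the attention computation below.
    futures = {
        eid: io_pool.submit(layer.load_expert_from_disk, eid)
        for eid in expert_ids
        if eid not in expert_cache
    }

    # 3. Attention executes while the reads are in flight.
    hidden_state = layer.attention(hidden_state)

    # 4. Block only on reads that have not finished yet.
    for eid, fut in futures.items():
        expert_cache[eid] = fut.result()

    # 5. Sparse MoE FFN over the now-resident experts.
    return layer.moe_ffn(hidden_state, expert_ids, expert_cache)
```

With a conventional router placed after attention, expert selection would only be known once attention finishes, leaving no computation to overlap the storage reads with; moving the router ahead of attention is what opens the prefetch window.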
```diff
@@ -42,14 +47,14 @@ All models are evaluated in non-thinking mode.
 
 
 ## Speed
 | Model | Memory (GiB) | i9 14900 | 1+13 8gen4 | rk3588 (16G) | rk3576 | Raspberry Pi 5 | RDK X5 | rk3566 |
 |---|---|---|---|---|---|---|---|---|
 | SmallThinker 4B + sparse ffn + sparse lm_head | 2.24 | 108.17 | 78.99 | 39.76 | 15.10 | 28.77 | 7.23 | 6.33 |
 | SmallThinker 4B + sparse ffn + sparse lm_head + limited memory | limit 1G | 29.99 | 20.91 | 15.04 | 2.60 | 0.75 | 0.67 | 0.74 |
 | Qwen3 0.6B | 0.6 | 148.56 | 94.91 | 45.93 | 15.29 | 27.44 | 13.32 | 9.76 |
 | Qwen3 1.7B | 1.3 | 62.24 | 41.00 | 20.29 | 6.09 | 11.08 | 6.35 | 4.15 |
 | Qwen3 1.7B + limited memory | limit 1G | 2.66 | 1.09 | 1.00 | 0.47 | - | - | 0.11 |
 | Gemma3n E2B | 1G, theoretically | 36.88 | 27.06 | 12.50 | 3.80 | 6.66 | 3.80 | 2.45 |
 
 Note: the i9 14900 and 1+13 8gen4 use 4 threads; other devices use the number of threads that achieves the maximum speed. All models here have been quantized to q4_0.
```
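The Memory (GiB) column lines up with what q4_0 quantization predicts. In the GGML/llama.cpp Q4_0 format, each block of 32 weights is stored as 32 four-bit values plus one fp16 scale, i.e. 18 bytes per block, or 4.5 bits per weight. A back-of-the-envelope check, treating the "4B" parameter count as a flat 4e9 (an approximation):

```python
# Back-of-the-envelope check of the Memory (GiB) column under Q4_0.
# GGML's Q4_0 stores 32 weights as 16 bytes of 4-bit values plus a
# 2-byte fp16 scale: 18 bytes per 32 weights = 4.5 bits per weight.
bits_per_weight = 18 * 8 / 32      # 4.5
params = 4e9                       # "4B" taken as a flat 4e9 (approximate)
gib = params * bits_per_weight / 8 / 2**30
print(f"~{gib:.2f} GiB")           # ~2.10 GiB vs. the 2.24 GiB reported
```

The small gap is plausibly the exact parameter count differing from a flat 4e9, plus some tensors (e.g. embeddings or norms) being kept at higher precision.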
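Since the updated metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, the checkpoints are meant to be loadable through the standard `transformers` pipeline API. A minimal sketch follows; the repository id is an assumption inferred from the release names in the abstract, so substitute the actual model id from this card.

```python
# Minimal text-generation sketch matching the card's metadata
# (library_name: transformers, pipeline_tag: text-generation).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="PowerInfer/SmallThinker-4BA0.6B-Instruct",  # assumed id; check the card
    device=-1,  # CPU; the paper targets ordinary consumer CPUs
)

result = generator(
    "Explain why on-device LLM inference is often storage-bound.",
    max_new_tokens=128,
)
print(result[0]["generated_text"])
```

Note that the speed table above was measured with q4_0-quantized models on the paper's co-designed inference engine, so throughput from this unquantized `transformers` sketch will be considerably lower than those numbers.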