---
base_model: bigcode/starcoder2-3b
datasets:
- bigcode/the-stack-v2-train
library_name: transformers
license: bigcode-openrail-m
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- matrixportal
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
model-index:
- name: starcoder2-3b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 32.7
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 25.0
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 27.7
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 27.4
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 31.7
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 71.19
---

# ysn-rfd/starcoder2-3b-GGUF
This model was converted to GGUF format from [`bigcode/starcoder2-3b`](https://huggingface.co/bigcode/starcoder2-3b) using llama.cpp via ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.
Refer to the [original model card](https://huggingface.co/bigcode/starcoder2-3b) for more details on the model.

## ✅ Quantized Models Download List

### 🔝 Recommended Quantizations
- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) (Best balance of speed/quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) (Optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) (Near-original quality)

### 📦 Full Quantization Options
| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.
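
Once a file is downloaded, it can be run directly with llama.cpp. A minimal sketch, assuming `llama-cli` is installed and is a recent build that supports the `-hf` flag for pulling GGUF files straight from the Hugging Face Hub (the quantization tag and prompt are illustrative):

```shell
# Pull the Q4_K_M file from this repo and run a short completion.
# -hf downloads and caches the GGUF automatically; -n limits generated tokens.
llama-cli -hf ysn-rfd/starcoder2-3b-GGUF:Q4_K_M \
  -p "def print_hello_world():" \
  -n 64

# Alternatively, point -m at a file downloaded manually from the table above:
# llama-cli -m starcoder2-3b-q4_k_m.gguf -p "def print_hello_world():" -n 64
```

Note that StarCoder2-3B is a base code model, so plain completion prompts like the one in the widget tend to work better than chat-style instructions.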


---
# 🚀 Applications and Tools for Locally Quantized LLMs
## 🖥️ Desktop Applications

| Application     | Description                                                                                  | Download Link                                                                  |
|-----------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Llama.cpp**   | A fast and efficient inference engine for GGUF models.                                       | [GitHub Repository](https://github.com/ggml-org/llama.cpp)                     |
| **Ollama**      | A streamlined solution for running LLMs locally.                                             | [Website](https://ollama.com/)                                                 |
| **AnythingLLM** | An AI-powered knowledge management tool.                                                     | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm)             |
| **Open WebUI**  | A user-friendly web interface for running local LLMs.                                        | [GitHub Repository](https://github.com/open-webui/open-webui)                  |
| **GPT4All**     | A user-friendly desktop application supporting various LLMs, compatible with GGUF models.    | [GitHub Repository](https://github.com/nomic-ai/gpt4all)                       |
| **LM Studio**   | A desktop application designed to run and manage local LLMs, supporting GGUF format.         | [Website](https://lmstudio.ai/)                                                |
| **GPT4All Chat**| A chat application compatible with GGUF models for local, offline interactions.              | [GitHub Repository](https://github.com/nomic-ai/gpt4all)                       |
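
For the Ollama entry above, GGUF repositories on the Hugging Face Hub can be run without writing a Modelfile. A minimal sketch, assuming Ollama is installed and using an illustrative quantization tag that matches a file in this repo:

```shell
# Ollama can pull GGUF models directly from Hugging Face by URL-style name.
# The :Q4_K_M suffix selects which quantized file to fetch.
ollama run hf.co/ysn-rfd/starcoder2-3b-GGUF:Q4_K_M "def print_hello_world():"
```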

---

## 📱 Mobile Applications

| Application       | Description                                                                                  | Download Link                                                                  |
|-------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **ChatterUI**     | A simple and lightweight LLM app for mobile devices.                                         | [GitHub Repository](https://github.com/Vali-98/ChatterUI)                      |
| **Maid**          | Mobile Artificial Intelligence Distribution for running AI models on mobile devices.         | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid)    |
| **PocketPal AI**  | A mobile AI assistant powered by local models.                                               | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai)                |
| **Layla**         | A flexible platform for running various AI models on mobile devices.                         | [Website](https://www.layla-network.ai/)                                       |

---

## 🎨 Image Generation Applications

| Application                         | Description                                                                                  | Download Link                                                                  |
|-------------------------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Stable Diffusion**                | An open-source AI model for generating images from text.                                     | [GitHub Repository](https://github.com/CompVis/stable-diffusion)               |
| **Stable Diffusion WebUI**          | A web application providing access to Stable Diffusion models via a browser interface.       | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui)   |
| **Local Dream**                     | Android Stable Diffusion with Snapdragon NPU acceleration; also supports CPU inference.      | [GitHub Repository](https://github.com/xororz/local-dream)                     |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation.        | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android)    |

---