---
base_model: bigcode/starcoder2-3b
datasets:
- bigcode/the-stack-v2-train
library_name: transformers
license: bigcode-openrail-m
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- matrixportal
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
model-index:
- name: starcoder2-3b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 32.7
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 25.0
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 27.7
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 27.4
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 31.7
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 71.19
---
# ysn-rfd/starcoder2-3b-GGUF

This model was converted to GGUF format from [`bigcode/starcoder2-3b`](https://huggingface.co/bigcode/starcoder2-3b) using llama.cpp via the [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.

Refer to the [original model card](https://huggingface.co/bigcode/starcoder2-3b) for more details on the model.
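These GGUF files work with any llama.cpp-compatible runtime. As a minimal sketch (one option among the tools listed further below), assuming the `llama-cpp-python` bindings are installed and the `Q4_K_M` file from the download list below has been fetched locally:

```python
# Minimal sketch: local code completion with a GGUF quant of StarCoder2-3B.
# Assumes `pip install llama-cpp-python` and that starcoder2-3b-q4_k_m.gguf
# (any quant from the table below also works) sits next to this script.
from llama_cpp import Llama

llm = Llama(
    model_path="starcoder2-3b-q4_k_m.gguf",  # path to the downloaded quant
    n_ctx=2048,                              # context window; adjust as needed
)

# StarCoder2 is a base code model, so prompt it with code to complete.
out = llm("def print_hello_world():", max_tokens=64, temperature=0.2)
print(out["choices"][0]["text"])
```

The prompt mirrors the widget example in the metadata above; settings such as `n_ctx` and `temperature` are illustrative, not tuned values.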
## Quantized Models Download List
### Recommended Quantizations
- **General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) (Best balance of speed/quality)
- **ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) (Optimized for ARM CPUs)
- **Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) (Near-original quality)
### Full Quantization Options
| Download | Type | Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q2_k.gguf) |  | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_s.gguf) |  | Small size |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_m.gguf) |  | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_l.gguf) |  | Better quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) |  | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_s.gguf) |  | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) |  | Best balance |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_0.gguf) |  | Good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_s.gguf) |  | Balanced |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_m.gguf) |  | High quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q6_k.gguf) |  | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) |  | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-f16.gguf) |  | Maximum accuracy |

**Tip:** Use `F16` for maximum precision when quality is critical.
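If you prefer to fetch a file programmatically rather than through the links above, here is a small sketch using the `huggingface_hub` client; the filename must match one of the quants listed in the table:

```python
# Sketch: download one quant from this repo programmatically.
# Assumes `pip install huggingface_hub`; pick any filename from the table above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ysn-rfd/starcoder2-3b-GGUF",
    filename="starcoder2-3b-q4_k_m.gguf",  # swap in q8_0, f16, etc. as needed
)
print(model_path)  # local cache path, ready to hand to a GGUF runtime
```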
---
# Applications and Tools for Locally Quantized LLMs
## Desktop Applications
| Application | Description | Download Link |
|-------------|-------------|---------------|
| **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
| **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
| **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
| **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
| **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
| **LM Studio** | A desktop application designed to run and manage local LLMs, supporting GGUF format. | [Website](https://lmstudio.ai/) |
| **GPT4All Chat** | A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |

Once a GGUF file from this repo has been imported into one of these tools, it is typically exposed through a local API; see the sketch below.
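For example, Ollama serves imported models over a local REST endpoint. The snippet below is an illustrative sketch only: it assumes Ollama is running on its default port and that a model named `starcoder2-3b` (a placeholder name) has already been created from one of the GGUF files in this repo.

```python
# Sketch: query a locally served model through Ollama's REST API.
# Assumes Ollama is running on the default port (11434) and that a model
# named "starcoder2-3b" (placeholder) was created from a GGUF in this repo.
import json
import urllib.request

payload = json.dumps({
    "model": "starcoder2-3b",              # placeholder model name
    "prompt": "def print_hello_world():",  # base code model: give it code to complete
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's generate endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```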
---
## Mobile Applications
| Application | Description | Download Link |
|-------------|-------------|---------------|
| **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
| **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
| **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
| **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |
---
## Image Generation Applications
| Application | Description | Download Link |
|-------------|-------------|---------------|
| **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
| **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
| **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |