---
base_model: bigcode/starcoder2-3b
datasets:
- bigcode/the-stack-v2-train
library_name: transformers
license: bigcode-openrail-m
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- matrixportal
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
model-index:
- name: starcoder2-3b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 32.7
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 25.0
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 27.7
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 27.4
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 31.7
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 71.19
---

# ysn-rfd/starcoder2-3b-GGUF

This model was converted to GGUF format from [`bigcode/starcoder2-3b`](https://huggingface.co/bigcode/starcoder2-3b) using llama.cpp via ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.
Refer to the [original model card](https://huggingface.co/bigcode/starcoder2-3b) for more details on the model.

## ✅ Quantized Models Download List

### 🔍 Recommended Quantizations
- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) (Best balance of speed/quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) (Optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) (Near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
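Any of the files above can also be fetched programmatically instead of through the download links. Below is a minimal sketch using the `huggingface_hub` client (assuming it is installed, e.g. `pip install huggingface_hub`), pulling the recommended Q4_K_M build into the local cache:

```python
from huggingface_hub import hf_hub_download

# Download one quantized file from this repo; swap the filename for any
# other quant listed in the table above (e.g. starcoder2-3b-q8_0.gguf).
gguf_path = hf_hub_download(
    repo_id="ysn-rfd/starcoder2-3b-GGUF",
    filename="starcoder2-3b-q4_k_m.gguf",
)
print(gguf_path)  # local path to pass to your GGUF runner of choice
```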
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/starcoder2-3b-GGUF/resolve/main/starcoder2-3b-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.

---

# 🚀 Applications and Tools for Locally Quantized LLMs

## 🖥️ Desktop Applications

| Application | Description | Download Link |
|-----------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Llama.cpp** | A fast and efficient inference engine for GGUF models (see the Python sketch after this table). | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
| **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
| **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
| **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
| **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
| **LM Studio** | A desktop application designed to run and manage local LLMs, supporting GGUF format. | [Website](https://lmstudio.ai/) |
| **GPT4All Chat**| A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
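As an illustration of local inference with Llama.cpp, here is a minimal sketch using its `llama-cpp-python` bindings. It assumes the package is installed (`pip install llama-cpp-python`) and that the Q4_K_M file downloaded earlier sits in the working directory; StarCoder2 is a base code model rather than a chat model, so a plain completion prompt works best.

```python
from llama_cpp import Llama

# Load the quantized GGUF file (the path is an assumption; point it at
# whichever quant you downloaded).
llm = Llama(model_path="starcoder2-3b-q4_k_m.gguf", n_ctx=2048)

# Code completion from the example prompt in this card's widget.
out = llm("def print_hello_world():", max_tokens=64, temperature=0.2)
print(out["choices"][0]["text"])
```

The same GGUF file can also be loaded by the desktop and mobile applications listed in these tables.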
---

## 📱 Mobile Applications

| Application | Description | Download Link |
|-------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
| **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
| **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
| **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |

---

## 🎨 Image Generation Applications

| Application | Description | Download Link |
|-------------------------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
| **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
| **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |

---