---
base_model: cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
tags:
- llama-cpp
- matrixportal
---

# ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF

This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) using llama.cpp via ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) for more details on the model.

## ✅ Quantized Models Download List

### 🔍 Recommended Quantizations
- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf) (best balance of speed and quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q4_0.gguf) (optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q8_0.gguf) (near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
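Any of the quantized files above can be loaded with llama.cpp or its bindings; the usage sketch shown after the table illustrates the Q4_K_M file with the `llama-cpp-python` bindings.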
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF/resolve/main/dolphin3.0-qwen2.5-1.5b-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.

As a minimal sketch of loading one of the files above, assuming the `llama-cpp-python` bindings (plus `huggingface_hub`) are installed; the repo and file names come from the table, while the context size and prompt are placeholders:

```python
# Minimal sketch: run the Q4_K_M quant with llama-cpp-python
# (assumes `pip install llama-cpp-python huggingface_hub`).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF",
    filename="dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf",  # any quant from the table above
    n_ctx=4096,       # example context window; adjust to fit your RAM
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Write a one-line Python function that reverses a string."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

---

# 🚀 Applications and Tools for Locally Quantized LLMs

## 🖥️ Desktop Applications

| Application | Description | Download Link |
|------------------|--------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
| **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
| **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
| **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
| **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
| **LM Studio** | A desktop application designed to run and manage local LLMs, supporting GGUF format. | [Website](https://lmstudio.ai/) |
| **GPT4All Chat** | A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
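Several of these desktop tools (llama.cpp's `llama-server`, Ollama, LM Studio) can serve a loaded GGUF model over an OpenAI-compatible HTTP endpoint. The sketch below assumes such a server is already running locally; the port and model name are placeholders and depend on the tool you choose:

```python
# Hypothetical sketch: query a locally running OpenAI-compatible server
# (e.g. one started with `llama-server -m dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf`).
# Adjust base_url and model to match your tool before running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama-server default; other tools use other ports
    api_key="not-needed",                 # local servers typically ignore the key
)

reply = client.chat.completions.create(
    model="dolphin3.0-qwen2.5-1.5b",      # placeholder; use the name your server reports
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is in two sentences."}],
    max_tokens=200,
)
print(reply.choices[0].message.content)
```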
---

## 📱 Mobile Applications

| Application | Description | Download Link |
|------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
| **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
| **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
| **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |

---

## 🎨 Image Generation Applications

| Application | Description | Download Link |
|-------------------------------------|--------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
| **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
| **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |

---