---
base_model: cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B
datasets:
  - OpenCoder-LLM/opc-sft-stage1
  - OpenCoder-LLM/opc-sft-stage2
  - microsoft/orca-agentinstruct-1M-v1
  - microsoft/orca-math-word-problems-200k
  - NousResearch/hermes-function-calling-v1
  - AI-MO/NuminaMath-CoT
  - AI-MO/NuminaMath-TIR
  - allenai/tulu-3-sft-mixture
  - cognitivecomputations/dolphin-coder
  - HuggingFaceTB/smoltalk
  - cognitivecomputations/samantha-data
  - m-a-p/CodeFeedback-Filtered-Instruction
  - m-a-p/Code-Feedback
language:
  - en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
tags:
  - llama-cpp
  - matrixportal
---

# ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF

This model was converted to GGUF format from cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B using llama.cpp via ggml.ai's all-gguf-same-where space. Refer to the original model card for more details on the model.
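For reference, converting a Hugging Face checkpoint to quantized GGUF files generally follows llama.cpp's standard two-step flow: convert, then quantize. The sketch below is illustrative only; the directory and output filenames are assumptions, and the all-gguf-same-where space may automate these steps differently.

```python
# Illustrative sketch of the usual llama.cpp convert + quantize flow.
# Paths and filenames below are assumptions, not the exact steps used to build this repo.
import subprocess

MODEL_DIR = "Dolphin3.0-Qwen2.5-1.5B"           # local clone of the original model (assumed path)
F16_GGUF = "dolphin3.0-qwen2.5-1.5b-f16.gguf"   # intermediate full-precision GGUF (assumed name)
Q4_GGUF = "dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf" # quantized output (assumed name)

# 1. Convert the Hugging Face checkpoint to a GGUF file with llama.cpp's converter script.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR, "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the F16 GGUF down to Q4_K_M with the llama-quantize tool.
subprocess.run(["./llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"], check=True)
```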

## ✅ Quantized Models Download List

πŸ” Recommended Quantizations

  • ✨ General CPU Use: Q4_K_M (Best balance of speed/quality)
  • πŸ“± ARM Devices: Q4_0 (Optimized for ARM CPUs)
  • πŸ† Maximum Quality: Q8_0 (Near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔒 Type | 📝 Notes |
|:---|:---|:---|
| Download | Q2_K | Basic quantization |
| Download | Q3_K_S | Small size |
| Download | Q3_K_M | Balanced quality |
| Download | Q3_K_L | Better quality |
| Download | Q4_0 | Fast on ARM |
| Download | Q4_K_S | Fast, recommended |
| Download | Q4_K_M | ⭐ Best balance |
| Download | Q5_0 | Good quality |
| Download | Q5_K_S | Balanced |
| Download | Q5_K_M | High quality |
| Download | Q6_K | 🏆 Very good quality |
| Download | Q8_0 | ⚡ Fast, best quality |
| Download | F16 | Maximum accuracy |

💡 Tip: Use F16 for maximum precision when quality is critical.
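As a quick-start sketch, a quantized file from this repo can be pulled with `huggingface_hub` and run locally through the `llama-cpp-python` bindings. The GGUF filename below is an assumption; check the repository's file list for the actual name of the quant you want.

```python
# Minimal sketch: download a quant from this repo and chat with it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed filename for the Q4_K_M quant; verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="ysn-rfd/Dolphin3.0-Qwen2.5-1.5B-GGUF",
    filename="dolphin3.0-qwen2.5-1.5b-q4_k_m.gguf",
)

# Load the model; context size here is just a reasonable default.
llm = Llama(model_path=model_path, n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about local inference."}]
)
print(response["choices"][0]["message"]["content"])
```

Larger quants such as Q6_K or Q8_0 trade memory and speed for quality; swap the filename accordingly.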


## 🚀 Applications and Tools for Locally Quantized LLMs

### 🖥️ Desktop Applications

| Application | Description | Download Link |
|:---|:---|:---|
| Llama.cpp | A fast and efficient inference engine for GGUF models. | GitHub Repository |
| Ollama | A streamlined solution for running LLMs locally. | Website |
| AnythingLLM | An AI-powered knowledge management tool. | GitHub Repository |
| Open WebUI | A user-friendly web interface for running local LLMs. | GitHub Repository |
| GPT4All | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | GitHub Repository |
| LM Studio | A desktop application designed to run and manage local LLMs, supporting GGUF format. | Website |
| GPT4All Chat | A chat application compatible with GGUF models for local, offline interactions. | GitHub Repository |

### 📱 Mobile Applications

| Application | Description | Download Link |
|:---|:---|:---|
| ChatterUI | A simple and lightweight LLM app for mobile devices. | GitHub Repository |
| Maid | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | GitHub Repository |
| PocketPal AI | A mobile AI assistant powered by local models. | GitHub Repository |
| Layla | A flexible platform for running various AI models on mobile devices. | Website |

### 🎨 Image Generation Applications

| Application | Description | Download Link |
|:---|:---|:---|
| Stable Diffusion | An open-source AI model for generating images from text. | GitHub Repository |
| Stable Diffusion WebUI | A web application providing access to Stable Diffusion models via a browser interface. | GitHub Repository |
| Local Dream | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | GitHub Repository |
| Stable-Diffusion-Android (SDAI) | An open-source AI art application for Android devices, enabling digital art creation. | GitHub Repository |