---
base_model:
- Almawave/Velvet-2B
base_model_relation: quantized
---

## DESCRIPTION

Velvet-2B converted to GGUF format (F32) with the fbuciuni90/llama.cpp fork and quantized with ggerganov/llama.cpp at commit b4689.

**NOTE: The Velvet tokenizer is not yet compatible with ggerganov/llama.cpp.** Please wait for pull request #11716 to be merged, or compile that branch yourself.

Original model: https://huggingface.co/Almawave/Velvet-2B

## PROMPT FORMAT

Basic prompt format:

```
{prompt}
```

Prompt format with a system message:

```
{system_prompt}\n\n{prompt}
```

## DOWNLOAD

| Quant | Link |
| ----- | ---- |
| Q3_K_S | [Velvet-2B-Q3_K_S.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q3_K_S.gguf) |
| Q3_K_M | [Velvet-2B-Q3_K_M.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q3_K_M.gguf) |
| Q4_K_S | [Velvet-2B-Q4_K_S.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q4_K_S.gguf) |
| Q4_K_M | [Velvet-2B-Q4_K_M.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q4_K_M.gguf) |
| Q5_K_S | [Velvet-2B-Q5_K_S.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q5_K_S.gguf) |
| Q5_K_M | [Velvet-2B-Q5_K_M.gguf](https://huggingface.co/DagMeow/Velvet-2B-GGUF/blob/main/Velvet-2B-Q5_K_M.gguf) |

## BYE :3