---
base_model:
  - Almawave/Velvet-2B
base_model_relation: quantized
---

## Description

Velvet-2B converted to GGUF format (F32) with the fbuciuni90/llama.cpp fork and quantized with ggerganov/llama.cpp at commit b4689.

**Note:** The Velvet tokenizer is not yet supported in upstream ggerganov/llama.cpp. Wait for pull request #11716 to be merged, or build llama.cpp from that branch yourself.

Original Model: https://huggingface.co/Almawave/Velvet-2B
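
For reference, the sketch below mirrors the conversion and quantization steps described above using the standard llama.cpp tooling. The local paths, output file names, and the Q4_K_M quantization type are illustrative assumptions; the actual F32 conversion for this repository was done with the fbuciuni90/llama.cpp fork.

```python
# Minimal sketch of a GGUF conversion + quantization workflow with llama.cpp.
# Paths, file names, and the quantization type are assumptions for illustration.
import subprocess

# Convert the Hugging Face checkpoint to an F32 GGUF file.
# convert_hf_to_gguf.py ships with llama.cpp; "Velvet-2B" is a local clone
# of https://huggingface.co/Almawave/Velvet-2B.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", "Velvet-2B",
        "--outtype", "f32",
        "--outfile", "Velvet-2B-F32.gguf",
    ],
    check=True,
)

# Quantize the F32 GGUF with the llama-quantize tool built as part of llama.cpp.
subprocess.run(
    [
        "llama.cpp/build/bin/llama-quantize",
        "Velvet-2B-F32.gguf",
        "Velvet-2B-Q4_K_M.gguf",
        "Q4_K_M",
    ],
    check=True,
)
```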

## Prompt format

Basic prompt format:

`<s><instruction>{prompt}</instruction>`

Prompt format with a system message:

`<s><instruction>{system_prompt}\n\n{prompt}</instruction>`
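
As a rough usage sketch, the snippet below applies this template with llama-cpp-python. The GGUF file name, prompts, stop strings, and settings are assumptions, and it requires a llama.cpp build that already understands the Velvet tokenizer (see the note above).

```python
# Rough sketch of running the quantized model with llama-cpp-python,
# assuming a build that includes Velvet tokenizer support.
from llama_cpp import Llama

# Hypothetical local file name for one of the quantized GGUF files.
llm = Llama(model_path="Velvet-2B-Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful assistant."
user_prompt = "What is the capital of Italy?"

# Build the prompt per the template above. The leading <s> (BOS) token is
# normally added during tokenization, so only the <instruction> wrapper is
# constructed here.
prompt = f"<instruction>{system_prompt}\n\n{user_prompt}</instruction>"

output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```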

## Download
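
A minimal sketch of pulling one of the GGUF files with huggingface_hub; the repo id and file name below are assumptions, so adjust them to the files actually listed in this repository.

```python
# Sketch: download a GGUF file from this repository with huggingface_hub.
# The repo_id and filename are assumptions; check the repo's file list for
# the exact quantization you want.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="DagMeow/Velvet-2B-GGUF",   # assumed repo id
    filename="Velvet-2B-Q4_K_M.gguf",   # assumed file name
)
print(model_path)  # local cache path of the downloaded GGUF
```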

BYE :3