GGUF llama.cpp quantized version of:

Recommended Prompt Format (Chat Format)

<|user|>
Provide some context and/or instructions to the model.<|end|>
<|assistant|>
AI message goes here<|end|>
<|user|>
The user’s message goes here<|end|>
<|assistant|>
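
Below is a minimal sketch of driving this prompt format with llama-cpp-python; it assumes you already have a local GGUF file. The filename, context size, and generation settings are placeholders, not values taken from this card.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file is local.
# The model_path below is a placeholder; point it at whichever quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="phi-3-medium-q5.gguf", n_ctx=4096)  # hypothetical filename

# Build the prompt exactly as in the recommended chat format above.
prompt = (
    "<|user|>\n"
    "Provide some context and/or instructions to the model.<|end|>\n"
    "<|assistant|>\n"
    "AI message goes here<|end|>\n"
    "<|user|>\n"
    "The user's message goes here<|end|>\n"
    "<|assistant|>\n"
)

# Stop on <|end|> so generation finishes cleanly at the assistant turn boundary.
output = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(output["choices"][0]["text"])
```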
Format: GGUF
Model size: 14B params
Architecture: phi3

Available quantizations: 5-bit, 16-bit
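
To fetch one of the quantized files programmatically, a sketch using huggingface_hub follows; the repo_id and filename are placeholders, since exact file names are not listed on this card.

```python
# Sketch only: repo_id and filename are placeholders for whichever
# quantization (e.g. 5-bit or 16-bit) you want from this repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-username/your-gguf-repo",  # placeholder repo id
    filename="model-q5_k_m.gguf",            # placeholder file name
)
print(local_path)
```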
