M4-ai/TinyMistral-248M-v2-Instruct-GGUF

Description

GGUF version of Locutusque/TinyMistral-248M-v2-Instruct.

Recommended inference parameters

do_sample: true
temperature: 0.1
top_p: 0.14
top_k: 12
repetition_penalty: 1.1
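
These parameters can be passed almost directly to llama-cpp-python, which loads GGUF files. A minimal sketch, assuming llama-cpp-python is installed; note that `do_sample` is a Hugging Face `transformers` flag (sampling is implicit in llama.cpp), and that llama.cpp names the repetition penalty `repeat_penalty`. The model filename below is hypothetical:

```python
# Recommended sampling parameters from the card, expressed as keyword
# arguments for llama-cpp-python's Llama.create_completion.
sampling_params = {
    "temperature": 0.1,
    "top_p": 0.14,
    "top_k": 12,
    "repeat_penalty": 1.1,  # llama.cpp's name for repetition_penalty
}

# Usage sketch (model filename is an assumption, not from the card):
# from llama_cpp import Llama
# llm = Llama(model_path="TinyMistral-248M-v2-Instruct.Q4_K_M.gguf")
# out = llm.create_completion(prompt, max_tokens=256, **sampling_params)
```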

Recommended prompt template

<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>assistant\n{assistant message}<|endoftext|>
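
The `{assistant message}` slot is what the model generates, so at inference time the prompt should stop after the opening assistant tag. A small helper implementing the template above (the function name is illustrative):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the model's ChatML-style template,
    leaving the assistant turn open for the model to complete."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("Hello!"))
```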
Model details

Model size: 248M params
Architecture: llama
Quantization levels

2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit


