Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF

Quantized GGUF model files for LocutusqueXFelladrin-TinyMistral248M-Instruct from Locutusque
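The GGUF files in this repo can be loaded with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the GGUF filename and the prompt template are assumptions (check the repository's file list and the original model cards for the exact names and chat format), so treat it as a starting point rather than a verified recipe.

```python
# Minimal sketch, assuming llama-cpp-python is installed and one of the
# quantized files from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    # Hypothetical filename: substitute the quantization you actually downloaded.
    model_path="./locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf",
    n_ctx=2048,  # context window
)

# Prompt format assumed from the TinyMistral-248M-Instruct card; verify before use.
prompt = "<|USER|> Write a Python function that reverses a string. <|ASSISTANT|> "
output = llm(prompt, max_tokens=256, stop=["<|USER|>"], temperature=0.7)
print(output["choices"][0]["text"])
```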

Original Model Card:

LocutusqueXFelladrin-TinyMistral248M-Instruct

This model was created by merging Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 using mergekit. After the merge, the resulting model was further trained on ~20,000 examples from the Locutusque/inst_mix_v2_top_100k dataset at a low learning rate to further normalize the weights. The following YAML config was used for the merge:

models:
  - model: Felladrin/TinyMistral-248M-SFT-v4
    parameters:
      weight: 0.5
  - model: Locutusque/TinyMistral-248M-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
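
For readers unfamiliar with mergekit's linear method, the sketch below illustrates the weighted averaging the config describes. This is an illustration, not mergekit's actual code (the merge itself is normally run with the mergekit CLI on the config above), and the output directory name is made up.

```python
# Illustrative sketch only: a linear merge is a normalized weighted average of
# matching parameter tensors, with the weights taken from the YAML config above
# (Locutusque/TinyMistral-248M-Instruct at 1.0, Felladrin/TinyMistral-248M-SFT-v4 at 0.5).
import torch
from transformers import AutoModelForCausalLM

def linear_merge(state_dicts, weights):
    """Return the normalized weighted average of matching parameter tensors."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            sd[name].float() * (w / total) for sd, w in zip(state_dicts, weights)
        )
    return merged

instruct = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M-Instruct")
sft = AutoModelForCausalLM.from_pretrained("Felladrin/TinyMistral-248M-SFT-v4")

merged_state = linear_merge(
    [instruct.state_dict(), sft.state_dict()],
    weights=[1.0, 0.5],  # weights from the merge config
)

# Load the averaged weights back into one of the models and save in float16,
# matching the config's dtype: float16. Output path is hypothetical.
instruct.load_state_dict(merged_state)
instruct.half().save_pretrained("./LocutusqueXFelladrin-TinyMistral248M-Instruct-merged")
```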

The resulting model combines the best of both worlds: Locutusque/TinyMistral-248M-Instruct's coding and reasoning skills, and Felladrin/TinyMistral-248M-SFT-v4's low hallucination rate and instruction-following ability. The merged model performs remarkably well considering its size.

Evaluation

Coming soon...

Format: GGUF
Model size: 248M params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit