MiniusLight-24B-v3

12B - 24B-v1 - 24B-v1.01 - 24B-v2 - 24B-v2.1 - 24B-v3

Cover image original source (click here)

What is this?

Maybe this is the last 24B Mistral model of this series. I'm tired (laugh).

Thanks to the two base models, this merge achieves very good style and consistency in long context. This was the 30th test, by the way, meaning 29 candidate models failed before I found and created this one.

Best model of the series (for me). :)

GGUF

Static - iMatrix
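If you want to run one of these quants locally, the minimal sketch below uses llama-cpp-python; the GGUF filename and sampling settings are assumptions, so substitute whichever static or iMatrix quant you actually downloaded.

  # Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
  # The GGUF filename is an assumption; point it at the quant you downloaded.
  from llama_cpp import Llama

  llm = Llama(
      model_path="MiniusLight-24B-v3.Q4_K_M.gguf",  # hypothetical quant filename
      n_ctx=8192,        # context length; adjust to your RAM/VRAM budget
      n_gpu_layers=-1,   # offload all layers when a GPU build is installed
  )

  out = llm.create_chat_completion(
      messages=[
          {"role": "system", "content": "You are a helpful roleplay assistant."},
          {"role": "user", "content": "Introduce yourself in two sentences."},
      ],
      temperature=0.8,
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])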

Other information

Chat template? Mistral V7 - Tekken. ChatML also works, but Mistral V7 - Tekken is recommended (see the sketch below for applying it).
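A minimal sketch of applying the template with transformers follows; it assumes the tokenizer bundled in DoppelReflEx/MiniusLight-24B-v3 carries the recommended chat template, and the messages are placeholders.

  # Sketch: format a prompt with the model's bundled chat template via transformers.
  # Assumes the repo's tokenizer ships the recommended template; messages are placeholders.
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("DoppelReflEx/MiniusLight-24B-v3")

  messages = [
      {"role": "system", "content": "You are a creative writing partner."},
      {"role": "user", "content": "Start a short scene in a rainy city."},
  ]

  prompt = tokenizer.apply_chat_template(
      messages,
      tokenize=False,              # return the formatted string instead of token ids
      add_generation_prompt=True,  # append the assistant header so generation continues from it
  )
  print(prompt)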

Merge Method

Detailed YAML config
  models:
    - model: TheDrummer/Cydonia-24B-v4.1
    - model: Delta-Vector/Rei-24B-KTO
  merge_method: slerp
  base_model: TheDrummer/Cydonia-24B-v4.1
  parameters:
    t: [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1]
  dtype: bfloat16
  tokenizer_source: base
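For intuition, this is a standard mergekit slerp recipe: each tensor from the two models is blended by spherical linear interpolation, with the factor t swept across the layer stack (small near the ends, 0.8 in the middle), so the base model Cydonia dominates the outer layers while Rei-24B-KTO contributes most in the middle. The sketch below only illustrates the per-tensor operation; it is not mergekit's implementation.

  # Illustration of spherical linear interpolation (slerp) between two weight
  # tensors, the per-tensor operation the config above asks mergekit to perform.
  # Not mergekit's actual code; it only demonstrates the math.
  import numpy as np

  def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
      """Interpolate from a (t=0, the base model) toward b (t=1) along the arc between them."""
      a_dir = a.ravel() / (np.linalg.norm(a) + eps)
      b_dir = b.ravel() / (np.linalg.norm(b) + eps)
      cos_theta = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
      theta = np.arccos(cos_theta)
      if theta < eps:  # nearly parallel tensors: plain linear interpolation is fine
          return (1.0 - t) * a + t * b
      sin_theta = np.sin(theta)
      return (np.sin((1.0 - t) * theta) / sin_theta) * a + (np.sin(t * theta) / sin_theta) * b

  # t=0.1 stays close to the base model; t=0.8 leans toward the second model.
  base_w = np.random.randn(4, 4).astype(np.float32)
  other_w = np.random.randn(4, 4).astype(np.float32)
  print(slerp(base_w, other_w, t=0.1))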
