
QuantFactory/MN-12B-Starcannon-v1-GGUF

This is a quantized version of aetherwiing/MN-12B-Starcannon-v1, created using llama.cpp.
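As an illustration of how GGUF quantizations like this are typically produced, the sketch below uses llama.cpp's converter and quantizer. The paths and the Q4_K_M quant type are placeholders, not the exact commands QuantFactory used:

```shell
# Convert the HF checkpoint to an fp16 GGUF file, then quantize it.
# Model path, output names, and quant type are illustrative placeholders.
python convert_hf_to_gguf.py ./MN-12B-Starcannon-v1 \
    --outfile MN-12B-Starcannon-v1-f16.gguf --outtype f16
./llama-quantize MN-12B-Starcannon-v1-f16.gguf \
    MN-12B-Starcannon-v1-Q4_K_M.gguf Q4_K_M
```

Repeating the second step with different quant types (Q2_K, Q8_0, and so on) yields the full set of bit-width variants listed below.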

Original Model Card

Mistral Nemo 12B Starcannon v1

This is a merge of pre-trained language models created using mergekit. It seems to retain Celeste's human-like prose, but is a bit more stable and better at NSFW.

Dynamic FP8
Static GGUFs (by Mradermacher)
IMatrix GGUFs (by Mradermacher)

Merge Details

Merge Method

This model was merged using the TIES merge method, with nothingiisreal/Celeste-12B-V1.6 as the base.
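For intuition, TIES merging trims each model's task vector (the delta from the base) to its largest-magnitude entries, elects a sign per parameter, and averages only the deltas that agree with that sign. A minimal toy sketch on flat NumPy vectors (not mergekit's actual implementation) might look like:

```python
import numpy as np

def ties_merge(base, finetuned, densities, weights):
    """Toy TIES merge of flat parameter vectors (illustrative only)."""
    deltas = []
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                        # task vector vs. base
        k = max(1, int(round(d * delta.size)))   # keep top `density` fraction
        thresh = np.sort(np.abs(delta))[-k]
        trimmed = np.where(np.abs(delta) >= thresh, delta, 0.0)
        deltas.append(w * trimmed)
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))       # elect a sign per parameter
    agree = np.where(np.sign(stacked) == elected, stacked, 0.0)
    counts = (agree != 0).sum(axis=0)            # disjoint mean of agreeing deltas
    merged_delta = agree.sum(axis=0) / np.maximum(counts, 1)
    return base + merged_delta
```

Conflicting updates (e.g. one model pushing a weight up while another pushes it down) are resolved by the sign election rather than cancelling each other out, which is the main difference from a plain weighted average.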

Merge fodder

Mergekit config:

```yaml
models:
  - model: intervitens/mini-magnum-12b-v1.1
    parameters:
      density: 0.3
      weight: 0.5
  - model: nothingiisreal/Celeste-12B-V1.6
    parameters:
      density: 0.7
      weight: 0.5

merge_method: ties
base_model: nothingiisreal/Celeste-12B-V1.6
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
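With mergekit installed, a config like the one above is typically run through the mergekit-yaml entry point; the config filename and output directory below are placeholders:

```shell
# Install mergekit, then run the merge described by the YAML config.
pip install mergekit
mergekit-yaml starcannon-config.yml ./MN-12B-Starcannon-v1
```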
Downloads last month: 49
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

