# Magnolia-Mell-v1-12B

This is a merge of pre-trained language models created using mergekit.

An asymmetric gradient SLERP was used to lightly apply MN-12B-Mag-Mell-R1 to Magnolia-v3-12B.

Tested for narrative text completion with temperature=1.0 and minP=0.02. Coherence is fairly high, though there may be occasional slips. If repetition is a problem, briefly raising the temperature may help; the model even appears to tolerate temperature=2.0.
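For reference, below is a minimal text-completion sketch with those settings using the transformers library. It assumes a recent transformers release with min_p sampling support and accelerate installed for device_map; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Magnolia-Mell-v1-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "The lighthouse keeper had not spoken in years."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,  # raise briefly (up to ~2.0 per the note above) if output repeats
    min_p=0.02,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```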

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
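For intuition: SLERP (spherical linear interpolation) treats each pair of corresponding weight tensors as vectors and interpolates along the arc between them rather than along a straight line. A minimal NumPy sketch of the idea (not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(u0, u1), -1.0, 1.0))
    theta = np.arccos(dot)   # angle between the two tensors
    if theta < 1e-6:         # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

a, b = np.random.randn(4096), np.random.randn(4096)
merged = slerp(0.09, a, b)  # stays close to a at small t
```

At t=0 the result is the first tensor unchanged; the small maximum t of 0.09 used in the configuration below keeps the merge heavily weighted toward the Magnolia-v3-12B base.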

### Models Merged

The following models were included in the merge:

- grimjim/Magnolia-v3-12B (base)
- inflatebot/MN-12B-Mag-Mell-R1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: grimjim/Magnolia-v3-12B
dtype: bfloat16
merge_method: slerp
slices:
- sources:
  - model: grimjim/Magnolia-v3-12B
    layer_range: [0,40]
  - model: inflatebot/MN-12B-Mag-Mell-R1
    layer_range: [0,40]
parameters:
  t:
    - filter: self_attn
      value: [0.0,0.09]
    - filter: mlp
      value: [0.09,0.0]
    - value: [0.0,0.09]
```
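The two-element `t` lists are gradients: as I understand mergekit's gradient syntax, the endpoints are interpolated linearly across the layers of the slice, so self-attention tensors move from t=0.0 at the first layer to t=0.09 at the last, MLP tensors run in the opposite direction, and all other tensors follow the default gradient. A small sketch of the per-layer factors this implies (illustrative only; mergekit computes these internally):

```python
# Per-layer interpolation factors implied by the two-point gradients above
# (illustrative; mergekit performs this interpolation itself).
def gradient(endpoints, num_layers=40):
    lo, hi = endpoints
    return [lo + (hi - lo) * i / (num_layers - 1) for i in range(num_layers)]

t_self_attn = gradient([0.0, 0.09])  # 0.0 at layer 0 -> 0.09 at layer 39
t_mlp = gradient([0.09, 0.0])        # reversed: strongest Mag-Mell influence early
t_default = gradient([0.0, 0.09])    # all remaining tensors

print(round(t_self_attn[20], 4))  # mid-stack self_attn factor, ~0.046
```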