
# EsotericSage-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the NearSwap merge method, with yamatazen/LinearWriter-12B as the base model.
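NearSwap blends the secondary model into the base only where the two models' weights already nearly agree: each element's blend weight is `t` divided by the absolute difference between the two values, clamped to [0, 1], so the tiny `t` values in the configuration below keep the merge very conservative. A minimal NumPy sketch of the element-wise operation, based on the commonly cited NearSwap formulation rather than mergekit's exact source:

```python
import numpy as np

def nearswap(base: np.ndarray, other: np.ndarray, t: float) -> np.ndarray:
    """Blend `other` into `base` only where the two tensors nearly agree."""
    diff = np.abs(base - other)
    # Per-element blend weight: 1 where |base - other| <= t,
    # decaying as t / |base - other| where the models disagree.
    with np.errstate(divide="ignore", invalid="ignore"):
        weight = np.clip(t / diff, 0.0, 1.0)
    weight = np.nan_to_num(weight, nan=1.0)  # identical elements: either value works
    # Standard linear interpolation with the per-element weight.
    return base * (1.0 - weight) + other * weight
```

With `t` on the order of 1e-4, only elements where the two models differ by roughly that magnitude or less are nudged toward the secondary model; everywhere else the base model's weights dominate.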

### Models Merged

The following models were included in the merge:

* [yamatazen/ForgottenMaid-12B](https://huggingface.co/yamatazen/ForgottenMaid-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: nearswap
dtype: bfloat16
out_dtype: bfloat16
base_model: yamatazen/LinearWriter-12B
models:
  - model: yamatazen/ForgottenMaid-12B
parameters:
  t: [0.0001, 0.0003, 0.0005, 0.0003, 0.0001]
```
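To reproduce the merge, this configuration can be passed to mergekit, either via the `mergekit-yaml` CLI or its Python API. A sketch using the Python API, assuming the YAML above is saved as `config.yaml` (a hypothetical filename) and mirroring the usage shown in mergekit's documentation:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration above (filename is an assumption).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge, writing the merged model to ./EsotericSage-12B.
run_merge(
    merge_config,
    "./EsotericSage-12B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # reduce peak memory while loading
    ),
)
```

The equivalent CLI invocation would be `mergekit-yaml config.yaml ./EsotericSage-12B`.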