The Karcher merge method does not require a base model.
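The reason no base model is needed: a Karcher (Riemannian/Fréchet) mean is an intrinsic barycenter of the input models' weights, computed iteratively rather than as a simple average around a reference. Below is a minimal numpy sketch of the Karcher mean on the unit sphere using log/exp maps — an illustration of the idea only, not mergekit's actual implementation (the max_iter parameter mirrors the one in the config further down):

```python
import numpy as np

def log_map(base, p):
    # tangent vector at `base` pointing toward `p` on the unit sphere
    dot = np.clip(np.dot(base, p), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-12:
        return np.zeros_like(base)
    v = p - dot * base
    return theta * v / np.linalg.norm(v)

def exp_map(base, v):
    # move from `base` along tangent vector `v`, staying on the sphere
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return base
    return np.cos(norm) * base + np.sin(norm) * v / norm

def karcher_mean(points, max_iter=1000, tol=1e-10):
    # iterative Riemannian barycenter: average the log-mapped points
    # in the tangent space at the current estimate, then step along it
    mean = points[0] / np.linalg.norm(points[0])
    for _ in range(max_iter):
        tangent = np.mean([log_map(mean, p) for p in points], axis=0)
        if np.linalg.norm(tangent) < tol:
            break
        mean = exp_map(mean, tangent)
    return mean
```

For two points the Karcher mean is simply the midpoint of the geodesic between them; with more points the iteration balances all of them at once, which is what lets the method treat every input model symmetrically.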

Model Highlights:

  • merge method: karcher

  • Highest precision: merged in float32 (dtype: float32) and saved as bfloat16 (out_dtype: bfloat16)

  • Brand-new chat template: ensures the model runs correctly in LM Studio

  • Context length: 32768

Model Selection Table:

Warning: models extended to 128K context may show slight quality loss. In most cases, use the native 32K context.

Parameter Settings:

Thinking Mode:

Temperature = 0.6, TopP = 0.95, TopK = 20, MinP = 0.
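For reference, this is what those four knobs do to the next-token distribution. A minimal numpy sketch of temperature plus top-k / top-p / min-p filtering — an illustration of the standard sampler semantics, not any particular engine's exact implementation:

```python
import numpy as np

def filter_logits(logits, temperature=0.6, top_k=20, top_p=0.95, min_p=0.0):
    # temperature rescales logits, then softmax gives probabilities
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    keep = np.zeros_like(probs, dtype=bool)
    # top-k: keep only the k most likely tokens
    keep[order[:top_k]] = True
    # top-p: keep the smallest prefix whose cumulative mass reaches top_p
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    topp_keep = np.zeros_like(keep)
    topp_keep[order[:cutoff]] = True
    keep &= topp_keep
    # min-p: drop tokens below min_p * max probability (min_p = 0 disables it)
    keep &= probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    return probs / probs.sum()
```

With MinP = 0 that filter is a no-op, so the recommended settings amount to: soften with temperature 0.6, then intersect the top-20 tokens with the 95% nucleus and renormalize.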

Configuration:

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
  - model: AXCXEPT/Qwen3-EZO-8B-beta
merge_method: karcher
parameters:
  max_iter: 1000
dtype: float32
out_dtype: bfloat16
tokenizer_source: AXCXEPT/Qwen3-EZO-8B-beta
```
