
Qwen2.5-14B-YOYO-V5

The fifth-generation Qwen2.5-YOYO model is officially released!

Upgrade points:

1. Integrated qihoo360/Light-R1-14B-DS into the reasoning merges.

2. Optimized the model-merging formula.

First stage:

models:  
  - model: tanliboy/lambda-qwen2.5-14b-dpo-test  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Qwen/Qwen2.5-14B-Instruct  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-dpo-it
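Each recipe in this card is a mergekit configuration (runnable with `mergekit-yaml config.yaml ./output-dir`). To make the `density`/`weight`/`lambda` parameters concrete, here is a loose, pure-Python sketch of the DELLA idea: each donor model contributes a task vector (its weights minus the base model's), a fraction of each task vector's entries is dropped (DELLA samples drops with magnitude-dependent probabilities; the sketch keeps the top-magnitude entries deterministically instead), and the survivors are rescaled by `lambda` before being added onto the base weights. All names below are illustrative, not mergekit internals:

```python
def della_merge(base, task_vectors, density=1.0, weight=1.0, lam=0.9):
    """Apply DELLA-style drop-and-rescale task vectors onto base weights."""
    merged = list(base)
    for tv in task_vectors:
        # Deterministic stand-in for DELLA's magnitude-based sampling:
        # keep the `density` fraction of entries with the largest magnitude.
        k = max(1, round(density * len(tv)))
        keep = set(sorted(range(len(tv)), key=lambda i: abs(tv[i]), reverse=True)[:k])
        for i, d in enumerate(tv):
            if i in keep:
                # `lambda` rescales the retained delta before it is applied.
                merged[i] += weight * lam * d
    return merged

base = [1.0, 2.0, 3.0, 4.0]
delta = [0.1, -0.4, 0.0, 0.2]  # fine-tuned weights minus base weights
merged = della_merge(base, [delta], density=1.0, weight=1.0, lam=0.9)
```

With `density: 1`, as in every della recipe here, nothing is dropped and `lambda: 0.9` acts as a mild damping factor on the deltas.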

Second stage:

Step 1:

Create three different instruction models and one code model.

models:  
  - model: mergekit-community/Qwen2.5-14B-dpo-it  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: arcee-ai/Virtuoso-Small-v2  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-della-v2-dpo

models:
  - model: mergekit-community/Qwen2.5-14B-dpo-it  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Azure99/Blossom-V6-14B  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-della-V6-dpo

models:
  - model: mergekit-community/Qwen2.5-14B-dpo-it  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: arcee-ai/SuperNova-Medius  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-della-Nova-dpo

models:
  - model: Qwen/Qwen2.5-Coder-14B-Instruct  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Qwen/Qwen2.5-Coder-14B  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-della-code
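The three instruction-model configs above are identical except for `base_model` and `name`. A small stdlib-only script (a hypothetical helper, not part of the released workflow) can generate them from one template, which avoids copy-paste drift between the variants:

```python
from string import Template

# Template for the shared della recipe; $base and $name vary per variant.
DELLA_TEMPLATE = Template("""\
models:
  - model: mergekit-community/Qwen2.5-14B-dpo-it
    parameters: {density: 1, weight: 1, lambda: 0.9}
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters: {density: 1, weight: 1, lambda: 0.9}
merge_method: della
base_model: $base
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: float16
tokenizer_source: base
name: $name
""")

# Output name -> base model, as in the three configs above.
VARIANTS = {
    "Qwen2.5-14B-della-v2-dpo": "arcee-ai/Virtuoso-Small-v2",
    "Qwen2.5-14B-della-V6-dpo": "Azure99/Blossom-V6-14B",
    "Qwen2.5-14B-della-Nova-dpo": "arcee-ai/SuperNova-Medius",
}

for name, base in VARIANTS.items():
    with open(f"{name}.yaml", "w") as f:
        f.write(DELLA_TEMPLATE.substitute(base=base, name=name))
```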

Step 2:

Create two different reasoning models.

merge_method: model_stock
base_model: arcee-ai/Virtuoso-Small-v2
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: qihoo360/Light-R1-14B-DS
dtype: float16
tokenizer_source: base
int8_mask: true
normalize: true
name: Qwen2.5-14B-YOYO-DS-v2

merge_method: model_stock
base_model: Azure99/Blossom-V6-14B
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: qihoo360/Light-R1-14B-DS
dtype: float16
tokenizer_source: base
int8_mask: true
normalize: true
name: Qwen2.5-14B-YOYO-DS-V6
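`model_stock` takes a different approach from `della`: it averages the fine-tuned models, then interpolates that average back toward the base model, with the interpolation ratio derived from how strongly the models' task vectors agree (their pairwise cosine similarity). A rough pure-Python sketch of that idea, simplified from the Model Stock paper and not mergekit's actual code:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def model_stock(base, models):
    """Interpolate the average of `models` toward `base` by agreement."""
    n = len(models)
    dim = len(base)
    # Task vectors: fine-tuned weights minus base weights.
    tvs = [[m[i] - base[i] for i in range(dim)] for m in models]
    # Average pairwise cosine similarity between task vectors.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cos = sum(cosine(tvs[i], tvs[j]) for i, j in pairs) / len(pairs)
    # Interpolation ratio (Model Stock): t = n*cos / (1 + (n-1)*cos).
    # Strong agreement -> t near 1 (keep the average);
    # orthogonal task vectors -> t = 0 (fall back to the base).
    t = n * cos / (1 + (n - 1) * cos)
    avg = [sum(m[i] for m in models) / n for i in range(dim)]
    return [t * avg[i] + (1 - t) * base[i] for i in range(dim)]
```

This is why the two reasoning merges above need no per-model weights: the method computes its own mixing ratio from the geometry of the task vectors.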

Third stage:

Create a base model with a 1-million-token context window.

merge_method: sce  
models:
  # Pivot model
  - model: Qwen/Qwen2.5-14B-Instruct-1M
  # Target models  
  - model: Qwen/Qwen2.5-14B  
base_model: Qwen/Qwen2.5-14B-Instruct-1M  
parameters:  
  select_topk: 1  
dtype: float16  
tokenizer_source: base  
normalize: true  
int8_mask: true  
name: Qwen2.5-14B-1M
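`sce` (Select, Calculate, Erase) works on the donors' task vectors relative to the pivot: it selects the `select_topk` fraction of entries with the highest variance across donors, calculates merge coefficients from what survives, and erases entries whose sign conflicts with the majority. The selection step can be sketched in pure Python (a hedged approximation; mergekit's implementation differs in detail):

```python
def sce_select(task_vectors, select_topk=1.0):
    """Return a 0/1 mask keeping the top-k fraction of positions by variance."""
    n = len(task_vectors)
    dim = len(task_vectors[0])
    means = [sum(tv[i] for tv in task_vectors) / n for i in range(dim)]
    # Variance across donor task vectors at each position: high variance
    # means the donors disagree there, so the position carries signal.
    var = [sum((tv[i] - means[i]) ** 2 for tv in task_vectors) / n
           for i in range(dim)]
    k = max(1, round(select_topk * dim))
    keep = set(sorted(range(dim), key=lambda i: var[i], reverse=True)[:k])
    return [1.0 if i in keep else 0.0 for i in range(dim)]
```

With `select_topk: 1`, as in the config above, every position is kept and the method reduces to its calculate-and-erase steps.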

models:
  - model: mergekit-community/Qwen2.5-14B-dpo-it  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: mergekit-community/Qwen2.5-14B-1M  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: float16  
tokenizer_source: base  
name: Qwen2.5-14B-della-1M-dpo

Final stage:

merge_method: model_stock
base_model: mergekit-community/Qwen2.5-14B-della-1M-dpo
models:
  - model: mergekit-community/Qwen2.5-14B-della-v2-dpo
  - model: mergekit-community/Qwen2.5-14B-della-V6-dpo
  - model: mergekit-community/Qwen2.5-14B-della-Nova-dpo
  - model: mergekit-community/Qwen2.5-14B-della-1M-dpo
  - model: mergekit-community/Qwen2.5-14B-YOYO-DS-v2
  - model: mergekit-community/Qwen2.5-14B-YOYO-DS-V6
  - model: mergekit-community/Qwen2.5-14B-della-code
dtype: float16
tokenizer_source: base
int8_mask: true
normalize: true
name: Qwen2.5-14B-YOYO-V5
Model size: 14.8B parameters (FP16, Safetensors).

Model repository: YOYO-AI/Qwen2.5-14B-YOYO-V5