Qwen3-14B-abliterated-TIES

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with Qwen/Qwen3-14B-Base as the base model.

Models Merged

The following models were included in the merge:

huihui-ai/Qwen3-14B-abliterated

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: huihui-ai/Qwen3-14B-abliterated
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen3-14B-Base
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
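
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. A minimal sketch, assuming mergekit is installed (pip install mergekit) and the config has been saved as ties-config.yaml (a hypothetical filename); it simply shells out to the mergekit-yaml entry point:

import subprocess

config_path = "ties-config.yaml"             # hypothetical path to the YAML above
output_dir = "./Qwen3-14B-abliterated-TIES"  # hypothetical output directory

# mergekit-yaml <config> <output-dir> runs the merge described by the config
subprocess.run(["mergekit-yaml", config_path, output_dir], check=True)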

Reasoning Fix

The abliteration and merge caused an issue where the <think> token would not always be selected properly. This was fixed by copying the token's embedding vector from Qwen/Qwen3-14B into the merged model, using the script below.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# paths
src = "Qwen/Qwen3-14B"  # donor model with the original <think> embedding
tgt = "TARGET_MODEL"    # the merged model to patch
out = "OUTPUT_DIR"      # where the patched model and tokenizer are saved

tok_tag = "<think>"

# load
src_tok = AutoTokenizer.from_pretrained(src)
tgt_tok = AutoTokenizer.from_pretrained(tgt)
src_model = AutoModelForCausalLM.from_pretrained(src, torch_dtype="auto", device_map="cpu")
tgt_model = AutoModelForCausalLM.from_pretrained(tgt, torch_dtype="auto", device_map="cpu")

# ids (don’t hard-code, trust the tokenizer)
sid = src_tok.convert_tokens_to_ids(tok_tag)
tid = tgt_tok.convert_tokens_to_ids(tok_tag)

if tid is None or tid == tgt_tok.unk_token_id:
    # tgt lost the token – add it back, resize, grab new id
    tgt_tok.add_tokens([tok_tag])
    tid = tgt_tok.convert_tokens_to_ids(tok_tag)
    tgt_model.resize_token_embeddings(len(tgt_tok))

# copy the vec
with torch.no_grad():
    tgt_model.get_input_embeddings().weight[tid].copy_(
        src_model.get_input_embeddings().weight[sid]
    )

# optional blend instead of overwrite
# with torch.no_grad():
#     src_vec = src_model.get_input_embeddings().weight[sid]
#     tgt_vec = tgt_model.get_input_embeddings().weight[tid].clone()
#     tgt_model.get_input_embeddings().weight[tid].copy_(0.7 * src_vec + 0.3 * tgt_vec)

# save
tgt_model.save_pretrained(out)
tgt_tok.save_pretrained(out)
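
To sanity-check the fix, a generation in thinking mode from the patched model should open with the <think> tag. A minimal sketch, assuming the Qwen3 chat template's enable_thinking switch and an arbitrary test prompt:

from transformers import AutoModelForCausalLM, AutoTokenizer

out = "OUTPUT_DIR"  # same placeholder directory as above

tok = AutoTokenizer.from_pretrained(out)
model = AutoModelForCausalLM.from_pretrained(out, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23?"}]  # arbitrary test prompt
text = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Qwen3 chat-template switch for reasoning mode
)
inputs = tok(text, return_tensors="pt").to(model.device)
gen_ids = model.generate(**inputs, max_new_tokens=256)
completion = tok.decode(gen_ids[0][inputs.input_ids.shape[-1]:])
print(completion)  # should typically start with <think> if the embedding copy worked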