A simple unalignment fine-tune on ~900k tokens, aimed at making the model more compliant and willing to handle user requests.

This uses the same unalignment training as concedo/Beepo-22B, so big thanks to concedo for the dataset.

The chat template is the same as the original model's: ChatML.
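
For reference, here is a minimal sketch of loading the model and chatting with it via `transformers`, using the tokenizer's built-in ChatML template. The model ID is this repo; the prompt contents and generation settings are illustrative assumptions, not recommended values.

```python
# Minimal usage sketch (assumptions: example prompt and generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# apply_chat_template renders the ChatML format:
# <|im_start|>system ... <|im_end|>
# <|im_start|>user ... <|im_end|>
# <|im_start|>assistant
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```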
