# DeepSeek-R1-Qwen-lorablated-32B
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the task arithmetic merge method, with deepseek-ai/DeepSeek-R1-Distill-Qwen-32B + nbeerbower/Qwen2.5-32B-abliterated-LORA as the base.
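Task arithmetic merges models by adding weighted "task vectors" (the parameter-wise difference between each fine-tuned model and the base) onto the base weights. A minimal PyTorch sketch of the per-tensor operation, not mergekit's actual implementation:

```python
import torch

def task_arithmetic(base: torch.Tensor, tuned: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    """merged = base + sum_i weights[i] * (tuned[i] - base)"""
    merged = base.clone()
    for t, w in zip(tuned, weights):
        merged += w * (t - base)  # add the weighted task vector
    return merged

# Toy tensors standing in for one weight matrix from each checkpoint.
base = torch.randn(8, 8)
tuned = [base + 0.05 * torch.randn(8, 8)]
merged = task_arithmetic(base, tuned, weights=[1.0])
```

In this card's configuration the single source is identical to the base (both are the distilled model with the LoRA applied at load time), so the task vector is zero and the merge effectively materializes the abliteration LoRA into full bfloat16 weights.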
### Models Merged

No models beyond the base were included in the merge; the configuration's single source is the base model itself (DeepSeek-R1-Distill-Qwen-32B with the abliterated LoRA applied).
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B+nbeerbower/Qwen2.5-32B-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 64]
    model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B+nbeerbower/Qwen2.5-32B-abliterated-LORA
    parameters:
      weight: 1.0
```
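To reproduce the merge, the configuration above can be passed to mergekit. A hedged sketch using mergekit's documented Python entry point (`run_merge`), assuming the YAML is saved as `config.yaml`:

```python
# pip install mergekit
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./DeepSeek-R1-Qwen-lorablated-32B",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when possible
        copy_tokenizer=True,             # ship the base tokenizer with the output
        lazy_unpickle=True,              # lower peak memory while reading shards
    ),
)
```

The `mergekit-yaml config.yaml ./DeepSeek-R1-Qwen-lorablated-32B` CLI performs the same merge.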
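For local inference, the merged model loads like any Qwen2.5-based causal LM. A minimal sketch with transformers (hypothetical prompt; a 32B model in bfloat16 needs roughly 65 GB of accelerator memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/DeepSeek-R1-Qwen-lorablated-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # shard across available devices
)

messages = [{"role": "user", "content": "Briefly explain task arithmetic merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```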