---
base_model:
- Qwen/Qwen2.5-14B
- Krystalan/DRT-o1-14B
- netease-youdao/Confucius-o1-14B
- huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
- djuna/Q2.5-Veltha-14B-0.5
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as the base model.
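In rough terms, SCE operates on the parameter deltas between each source model and the base: it selects the elements that vary most across the sources, derives one fusion coefficient per source from the magnitude of its selected deltas, erases contributions whose sign disagrees with the consensus, and adds the weighted result back onto the base weights. The snippet below is only an unofficial sketch of that idea for a single weight tensor, not mergekit's implementation; the function name `sce_merge_tensor` and the `select_frac` parameter are invented for illustration.

```python
# Rough, unofficial sketch of the SCE idea on one weight tensor.
# Not mergekit's code; `sce_merge_tensor` and `select_frac` are illustrative names.
import torch

def sce_merge_tensor(base: torch.Tensor, finetuned: list[torch.Tensor],
                     select_frac: float = 0.1) -> torch.Tensor:
    deltas = torch.stack([ft - base for ft in finetuned])  # (n_models, *shape)

    # Select: keep only the elements with the highest variance across sources.
    var = deltas.var(dim=0, unbiased=False)
    k = max(1, int(select_frac * var.numel()))
    cutoff = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= cutoff)

    # Calculate: one fusion coefficient per source model, proportional to the
    # squared magnitude of its selected delta elements.
    weights = deltas.pow(2).flatten(start_dim=1).sum(dim=1)
    weights = weights / weights.sum().clamp_min(1e-12)
    weights = weights.view(-1, *([1] * (deltas.dim() - 1)))

    # Erase: zero out elements whose sign disagrees with the elected sign.
    elected = torch.sign((weights * deltas).sum(dim=0))
    deltas = torch.where(torch.sign(deltas) == elected, deltas,
                         torch.zeros_like(deltas))

    # Fuse the surviving, weighted deltas back onto the base weights.
    return base + (weights * deltas).sum(dim=0)
```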
### Models Merged
The following models were included in the merge:
* [Krystalan/DRT-o1-14B](https://huggingface.co/Krystalan/DRT-o1-14B)
* [netease-youdao/Confucius-o1-14B](https://huggingface.co/netease-youdao/Confucius-o1-14B)
* [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated)
* [djuna/Q2.5-Veltha-14B-0.5](https://huggingface.co/djuna/Q2.5-Veltha-14B-0.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-14B
  - model: netease-youdao/Confucius-o1-14B
  - model: djuna/Q2.5-Veltha-14B-0.5
  - model: Krystalan/DRT-o1-14B
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
merge_method: sce
base_model: Qwen/Qwen2.5-14B
tokenizer:
  source: "union"
  tokens:
    <|endoftext|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|im_start|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|im_end|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|object_ref_start|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|object_ref_end|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|box_start|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|box_end|>:
      source: "djuna/Q2.5-Veltha-14B-0.5"
    <|end▁of▁sentence|>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<|end▁of▁sentence|>"
      force: true
    <|User|>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<|User|>"
      force: true
    <|Assistant|>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<|Assistant|>"
      force: true
    <|begin▁of▁sentence|>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<|begin▁of▁sentence|>"
      force: true
    <|EOT|>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<|EOT|>"
      force: true
    <think>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "<think>"
      force: true
    </think>:
      source:
        model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated"
        kind: "model_token"
        token: "</think>"
      force: true
dtype: float32
out_dtype: bfloat16
```
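### Usage
The merged model can be loaded like any other Qwen2.5-based causal LM with Transformers. This is only a usage sketch: `your-namespace/this-merge` below is a placeholder for wherever the merged weights are hosted, and the prompt format may need adjusting depending on which chat template the union tokenizer ends up carrying.

```python
# Usage sketch; "your-namespace/this-merge" is a placeholder repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/this-merge"  # replace with the actual repo or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```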