---
tags:
- merge
- mergekit
- lazymergekit
- senseable/Westlake-7B-v2
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- mlabonne/NeuralMarcoro14-7B
base_model:
- senseable/Westlake-7B-v2
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- mlabonne/NeuralMarcoro14-7B
license: apache-2.0
---
# WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
WestOrcaNeuralMarco-DPO-v2-DARETIES-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [senseable/Westlake-7B-v2](https://huggingface.co/senseable/Westlake-7B-v2)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)
## 🧩 Configuration
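This is a [DARE-TIES](https://arxiv.org/abs/2311.03099) merge: for each fine-tuned model, `density` is the fraction of its delta weights (relative to the Mistral-7B base) retained after random pruning, and `weight` sets its relative contribution to the final merge.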
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: senseable/Westlake-7B-v2
    parameters:
      density: 0.73
      weight: 0.4
  - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
    parameters:
      density: 0.55
      weight: 0.3
  - model: mlabonne/NeuralMarcoro14-7B
    parameters:
      density: 0.45
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
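To reproduce the merge, save the configuration above as `config.yaml` and run it through the mergekit CLI, e.g. `mergekit-yaml config.yaml ./WestOrcaNeuralMarco-DPO-v2-DARETIES-7B` (the output path here is illustrative).

## 💻 Usage

A minimal sketch of running the merged model with 🤗 Transformers. The repo ID is a placeholder for wherever these weights are published, and the sampling settings are illustrative:

```python
# A minimal usage sketch, assuming the merged weights are hosted on the
# Hugging Face Hub. The repo ID below is a placeholder, not the real path.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "your-username/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 matches the dtype used for the merge above.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes the tokenizer ships a chat template; otherwise format the prompt by hand.
messages = [{"role": "user", "content": "What is a merged language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```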
Credit to Maxime Labonne and his excellent blog: [https://mlabonne.github.io/blog/](https://mlabonne.github.io/blog/).