---
base_model:
- TheDrummer/Cydonia-24B-v2
- Undi95/MistralThinker-v1.1
- ReadyArt/Forgotten-Safeword-24B-V3.0
library_name: transformers
tags:
- mergekit
- merge
---
# Cydonia-ForgottenThoughts-24B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TheDrummer/Cydonia-24B-v2](https://huggingface.co/TheDrummer/Cydonia-24B-v2) as a base.
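Roughly, TIES builds a "task vector" (each model's delta from the base), trims each vector to its highest-magnitude entries, elects a per-coordinate sign across models, and averages only the values that agree with that sign before adding the result back to the base. A toy sketch of that procedure on 1-D arrays (an illustration only, not mergekit's actual implementation; the function name and parameters here are my own):

```python
import numpy as np

def ties_merge(base, deltas, density=0.5, weights=None, lam=1.0):
    """Toy TIES merge on 1-D parameter vectors (illustration, not mergekit)."""
    weights = weights or [1.0] * len(deltas)
    trimmed = []
    for d in deltas:
        # Trim: zero out everything below the k-th largest magnitude
        # (ties at the threshold may keep slightly more than k entries).
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect a sign per coordinate from the weighted sum of trimmed deltas.
    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    sign = np.sign(stacked.sum(axis=0))
    # Keep only entries agreeing with the elected sign, then average them.
    agree = np.where(np.sign(stacked) == sign, stacked, 0.0)
    counts = (agree != 0).sum(axis=0)
    merged = agree.sum(axis=0) / np.maximum(counts, 1)
    # lam scales the merged task vector before it is added back (the
    # `lambda` parameter in the config below plays this role).
    return base + lam * merged
```

The `density`, `weight`, and `lambda` keys in the configuration below map onto the trim ratio, per-model weights, and final scaling in this sketch, applied per tensor (and per `filter`, e.g. `self_attn` vs. `mlp`) rather than to one flat vector.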
### Models Merged
The following models were included in the merge:
* [Undi95/MistralThinker-v1.1](https://huggingface.co/Undi95/MistralThinker-v1.1)
* [ReadyArt/Forgotten-Safeword-24B-v3.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v3.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Base model
  - model: TheDrummer/Cydonia-24B-v2
  # Merge models
  - model: ReadyArt/Forgotten-Safeword-24B-v3.0
    parameters:
      density:
        - filter: self_attn
          value: 0.75
        - value: 0.5
      weight:
        - filter: self_attn
          value: [0.5, 0.35, 0.5, 0.35, 0.5]
        - filter: mlp
          value: [0.25, 0.35, 0.25, 0.35, 0.25]
        - value: 0.35
      lambda:
        - filter: self_attn
          value: 1.05
        - value: 0.9
  - model: Undi95/MistralThinker-v1.1
    parameters:
      density:
        - filter: mlp
          value: 0.75
        - value: 0.5
      weight:
        - filter: mlp
          value: [0.5, 0.35, 0.5, 0.35, 0.5]
        - filter: self_attn
          value: [0.25, 0.35, 0.25, 0.35, 0.25]
        - value: 0.35
      lambda:
        - filter: mlp
          value: 1.05
        - value: 0.9
merge_method: ties
base_model: TheDrummer/Cydonia-24B-v2
parameters:
  normalize: true
dtype: float32
```
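Assuming the YAML above is saved as `config.yaml`, a merge like this one can presumably be reproduced with mergekit's command-line tool (the output directory name here is illustrative):

```shell
# Install mergekit, then run the merge; --cuda uses a GPU if available.
pip install mergekit
mergekit-yaml config.yaml ./Cydonia-ForgottenThoughts-24B --cuda
```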