---
base_model: TareksLab/Alkahest-V3.2-LLaMa-70B
language:
  - en
library_name: transformers
quantized_by: ArtusDev
base_model_relation: quantized
tags:
  - mergekit
  - merge
license: llama3
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [TareksLab/Stylizer-V1-LLaMa-70B](https://huggingface.co/TareksLab/Stylizer-V1-LLaMa-70B) as the base model.

### Models Merged

The following models were included in the merge:

* [TareksLab/Wordsmith-V9-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V9-LLaMa-70B)
* [TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B)
* [TareksLab/Malediction-V1-LLaMa-70B](https://huggingface.co/TareksLab/Malediction-V1-LLaMa-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/Wordsmith-V9-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Malediction-V1-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Stylizer-V1-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/Stylizer-V1-LLaMa-70B
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
```
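To reproduce the merge, this configuration can be passed to mergekit's `mergekit-yaml` entry point. For inference, the card lists `transformers` as the library and ships a `llama3` chat template, so a minimal loading sketch might look like the one below. The repository id is a placeholder assumption (substitute the id of the quant repo you are actually downloading), and at bfloat16 the 70B weights alone take roughly 140 GB, so `device_map="auto"` is used to shard them across available devices.

```python
# A minimal sketch, not a verified recipe: assumes this repo loads with the
# standard transformers API and that the repo id below matches your copy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/Alkahest-V3.2-LLaMa-70B"  # placeholder: use the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",           # shard the 70B weights across available GPUs
)

# chat_template is llama3, so apply_chat_template handles the prompt format.
messages = [{"role": "user", "content": "Describe a ruined alchemist's tower."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```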