# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with TareksLab/Stylizer-V1-LLaMa-70B as the base model.
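For intuition: DARE TIES first sparsifies each fine-tuned model's parameter deltas against the base (DARE: randomly drop delta components and rescale the survivors so the expected value is preserved), then resolves sign conflicts across models with TIES-style sign election before summing. The sketch below is a conceptual illustration on plain tensors, not mergekit's actual implementation; the function name and shapes are assumptions.

```python
import torch

def dare_ties_merge(base, finetuned, weights, density=0.5):
    """Conceptual DARE TIES merge of one parameter tensor (illustrative only).

    base:      tensor of base-model parameters
    finetuned: list of tensors, one per merged model
    weights:   per-model merge weights (0.25 each in the config below)
    density:   fraction of each delta kept by DARE (0.5 in the config below)
    """
    # Task vectors: what each fine-tune changed relative to the base.
    deltas = [ft - base for ft in finetuned]
    # DARE: randomly drop delta components, rescale survivors by 1/density.
    sparse = [d * (torch.rand_like(d) < density) / density for d in deltas]
    # TIES sign election: elect a per-parameter sign from the weighted deltas.
    weighted = [w * d for w, d in zip(weights, sparse)]
    elected = torch.sign(sum(weighted))
    # Keep only components agreeing with the elected sign, then sum.
    # With `normalize: false` (as in this config) the sum is not renormalized.
    merged = sum(torch.where(torch.sign(d) == elected, d, torch.zeros_like(d))
                 for d in weighted)
    return base + merged
```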
### Models Merged
The following models were included in the merge:
- TareksLab/Wordsmith-V9-LLaMa-70B
- TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
- TareksLab/Malediction-V1-LLaMa-70B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TareksLab/Wordsmith-V9-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Malediction-V1-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Stylizer-V1-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/Stylizer-V1-LLaMa-70B
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
```
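To reproduce the merge, the configuration above can be saved to a file and passed to mergekit, either via its `mergekit-yaml` CLI or its Python API. The sketch below assumes mergekit is installed locally; the file and output paths are hypothetical:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "merge-config.yaml"  # the YAML above, saved to disk (hypothetical path)
OUTPUT_PATH = "./merged-model"     # hypothetical output directory

# Parse and validate the merge configuration.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge, writing the merged weights and tokenizer to OUTPUT_PATH.
run_merge(
    config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # write a tokenizer alongside the weights
        low_cpu_memory=False,
    ),
)
```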
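Since the config sets `out_dtype: bfloat16` and `chat_template: llama3`, the merged model can be loaded and prompted with standard transformers tooling. A minimal sketch, assuming a local copy of the merged model at a hypothetical path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./merged-model"  # hypothetical path; substitute the actual model location

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

# The llama3 chat template is applied via the tokenizer.
messages = [{"role": "user", "content": "Describe a haunted tavern for a campaign."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```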