# Tess_brain
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged from two base models and their spatially "aware" LoRA-fused counterparts, in five stages:
- Step 1 - Turn the two Tesseract models into four by fusing a LoRA into each, producing `Tess_0_2_VAR_r128` and `Tess_2_0_VAR_r128` (a sketch follows this list).
- Step 2 - Model Stock merge of all four models (`Tess_brain_stock`).
- Step 3 - NuSLERP merge of Tesseract-V0.2 with its LoRA-fused counterpart (`Tess_brain_mslerp1`).
- Step 4 - NuSLERP merge of Tesseract-V2.0 with its LoRA-fused counterpart (`Tess_brain_mslerp2`).
- Step 5 - Final NuSLERP of the two intermediates, with the Model Stock merge as the base model.
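Step 1 happens before mergekit runs, so it is not captured in the YAML below. Here is a minimal sketch of fusing a LoRA adapter into one of the base models with `peft`; the adapter path is a placeholder, and the assumption that the `*_VAR_r128` checkpoints come from a rank-128 adapter is inferred only from their names.

```python
# Minimal sketch: bake a LoRA adapter into a base model with peft.
# "path/to/spatial_lora_r128" is a placeholder, not the actual adapter used.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TareksTesting/Tesseract-V2.0-LLaMa-70B",
    torch_dtype=torch.bfloat16,
)
# Attach the adapter on top of the frozen base weights...
model = PeftModel.from_pretrained(base, "path/to/spatial_lora_r128")
# ...then fold the adapter deltas into the base weights and drop the PEFT wrappers.
fused = model.merge_and_unload()
fused.save_pretrained("Tess_2_0_VAR_r128")
```

Repeating this for the second base model yields the four checkpoints consumed by Steps 2-5.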
### Configuration (schematic)
The following YAML configuration was used to produce this model:
```yaml
merge_method: nuslerp
models:
  - model: "D:\\mergekit\\_My_YAMLS\\Tess_brain_mslerp2"
    parameters:
      weight: 1
  - model: "D:\\mergekit\\_My_YAMLS\\Tess_brain_mslerp1"
    parameters:
      weight: 1
base_model: "D:\\mergekit\\_My_YAMLS\\Tess_brain_stock"
parameters:
  normalize: false
  int8_mask: true
---
name: Tess_brain_mslerp1
merge_method: nuslerp
models:
  - model: TareksTesting/Tesseract-V0.2-LLaMa-70B
    parameters:
      weight: 1
  - model: "D:\\mergekit\\Tess_0_2_VAR_r128"
    parameters:
      weight: 1
parameters:
  normalize: false
  int8_mask: true
---
name: Tess_brain_mslerp2
merge_method: nuslerp
models:
  - model: "D:\\mergekit\\Tess_2_0_VAR_r128"
    parameters:
      weight: 1
  - model: TareksTesting/Tesseract-V2.0-LLaMa-70B
    parameters:
      weight: 1
parameters:
  normalize: false
  int8_mask: true
---
name: Tess_brain_stock
models:
  - model: "D:\\mergekit\\Tess_2_0_VAR_r128"
  - model: TareksTesting/Tesseract-V2.0-LLaMa-70B
  - model: TareksTesting/Tesseract-V0.2-LLaMa-70B
  - model: "D:\\mergekit\\Tess_0_2_VAR_r128"
base_model: meta-llama/Llama-3.3-70B-Instruct
merge_method: model_stock
---
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: union
```
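For reference, `nuslerp` applies spherical linear interpolation (SLERP) to corresponding tensors of the two models, and the equal `weight: 1` entries above normalize to an interpolation factor of t = 0.5. Below is a minimal sketch of the underlying SLERP operation on a pair of weight tensors, written against plain PyTorch rather than mergekit's exact implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, treated as flat vectors.
    cos_theta = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta < eps:  # near-parallel tensors: fall back to linear interpolation
        return (1 - t) * a + t * b
    sin_theta = torch.sin(theta)
    # Geodesic interpolation on the arc spanned by a and b.
    return (torch.sin((1 - t) * theta) / sin_theta) * a + (torch.sin(t * theta) / sin_theta) * b

# Equal weights (1 and 1) reduce to the geodesic midpoint, t = 0.5.
merged = slerp(0.5, torch.randn(8, 8), torch.randn(8, 8))
```

When a `base_model` is supplied, as in the final stage with `Tess_brain_stock`, nuslerp interpolates the task vectors (differences from the base) rather than the raw weights.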