Phr00tyMix-v4-32B
Phr00tyMix-v3 increased creativity, but at the expense of some instruction following and coherence. This mix is intended to fix that, which should improve its storytelling and obedience. The model remains very creative, uncensored (when asked to be) and smart.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Model Stock merge method, with Phr00t/Phr00tyMix-v3-32B as the base.
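Roughly speaking, Model Stock averages the fine-tuned checkpoints and then interpolates that average back toward the base model, with the interpolation ratio derived from how closely the fine-tuned weights agree. Below is a minimal per-tensor sketch of that idea, following the formula from the Model Stock paper; it is illustrative only and is not mergekit's actual implementation.

```python
# Simplified per-tensor sketch of the Model Stock idea: average the fine-tuned
# weights, then interpolate toward the base using a ratio derived from the
# average angle between the fine-tuned deltas. Illustrative only; mergekit's
# model_stock handles per-layer details and edge cases omitted here.
import torch

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between the deltas (the "angle" in the paper).
    cosines = [
        torch.nn.functional.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cosines).mean().clamp(min=0.0)
    # Interpolation ratio t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```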
Models Merged
The following models were included in the merge:
- allura-org/Qwen2.5-32b-RP-Ink
- nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored
- Delta-Vector/Archaeo-32B-KTO
- arcee-ai/Virtuoso-Medium-v2
- Phr00t/Phr00tyMix-v2-32B
Configuration
The following YAML configuration was used to produce this model:
merge_method: model_stock
base_model: Phr00t/Phr00tyMix-v3-32B
dtype: bfloat16
models:
  - model: Delta-Vector/Archaeo-32B-KTO
  - model: allura-org/Qwen2.5-32b-RP-Ink
  - model: arcee-ai/Virtuoso-Medium-v2
  - model: Phr00t/Phr00tyMix-v2-32B
  - model: nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored
tokenizer:
  source: "Delta-Vector/Archaeo-32B-KTO"
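A minimal usage sketch with Hugging Face transformers is shown below. The repository id, dtype, and chat-template usage are assumptions based on the model name above (a Qwen2.5-based, bfloat16 merge); adjust device settings to your hardware.

```python
# Minimal sketch: load the merged model with Hugging Face transformers.
# Assumes the weights are published as "Phr00t/Phr00tyMix-v4-32B" and that the
# tokenizer (sourced from Delta-Vector/Archaeo-32B-KTO) ships a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Phr00t/Phr00tyMix-v4-32B"  # assumed repo id, per the model name above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```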