
Phr00tyMix-v4-32B

Phr00tyMix-v3 increased creativity, but at the expense of some instruction following and coherence. This mix is intended to fix that, which should improve both its storytelling and its instruction adherence. The model remains very creative, uncensored (when asked to be), and smart.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with Phr00t/Phr00tyMix-v3-32B as the base.

Models Merged

The following models were included in the merge:

Delta-Vector/Archaeo-32B-KTO
allura-org/Qwen2.5-32b-RP-Ink
arcee-ai/Virtuoso-Medium-v2
Phr00t/Phr00tyMix-v2-32B
nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored

Configuration

The following YAML configuration was used to produce this model:

merge_method: model_stock
base_model: Phr00t/Phr00tyMix-v3-32B
dtype: bfloat16
models:
  - model: Delta-Vector/Archaeo-32B-KTO
  - model: allura-org/Qwen2.5-32b-RP-Ink
  - model: arcee-ai/Virtuoso-Medium-v2
  - model: Phr00t/Phr00tyMix-v2-32B
  - model: nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored
tokenizer:
  source: "Delta-Vector/Archaeo-32B-KTO"
GGUF

Model size: 32.8B params
Architecture: qwen2
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit