Qwen3-4B-PlumEsper

PlumEsper is a merged language model created using MergeKit, combining the specialized strengths and general reasoning capabilities of Esper 3 4B and Shining Valiant 3 4B. Built on Qwen/Qwen3-4B as the base, the merge was performed using the DELLA merge method. This fusion brings together the expertise of two powerful models: ValiantLabs/Qwen3-4B-ShiningValiant3 and ValiantLabs/Qwen3-4B-Esper3, resulting in a balanced model with enhanced versatility across a wide range of reasoning tasks.
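The merge described above can be expressed as a MergeKit configuration. Below is a minimal sketch of what such a DELLA config looks like; the `weight` and `density` values are illustrative assumptions, not the actual recipe used for PlumEsper:

```yaml
# Hypothetical MergeKit DELLA config for a PlumEsper-style merge.
# weight/density values are placeholders, not the published recipe.
merge_method: della
base_model: Qwen/Qwen3-4B
models:
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
    parameters:
      weight: 0.5
      density: 0.5
  - model: ValiantLabs/Qwen3-4B-Esper3
    parameters:
      weight: 0.5
      density: 0.5
dtype: bfloat16
```

A config like this is run with MergeKit's `mergekit-yaml` command, pointing it at the config file and an output directory.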

Model Files

| File Name | Size | Precision |
|---|---|---|
| Qwen3-4B-PlumEsper.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-PlumEsper.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-PlumEsper.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-PlumEsper.Q2_K.gguf | 1.67 GB | Q2_K |
| Qwen3-4B-PlumEsper.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-PlumEsper.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| Qwen3-4B-PlumEsper.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-PlumEsper.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-PlumEsper.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-PlumEsper.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-PlumEsper.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-PlumEsper.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-PlumEsper.Q8_0.gguf | 4.28 GB | Q8_0 |

Quants Usage

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
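One way to compare the quants in the table is by their rough effective bits per weight, derived from file size and the 4.02B parameter count. The figure is approximate, since GGUF files also carry metadata and some tensors kept at higher precision:

```python
# Rough effective bits-per-weight for selected quants, using the file
# sizes from the table above and the 4.02B parameter count. Approximate:
# GGUF files also contain metadata and mixed-precision tensors.
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Approximate effective bits per weight (size in GB, params in billions)."""
    return file_size_gb * 8.0 / n_params_billion

for name, size_gb in [("Q2_K", 1.67), ("Q4_K_M", 2.5), ("Q8_0", 4.28), ("F16", 8.05)]:
    print(f"{name}: ~{bits_per_weight(size_gb, 4.02):.1f} bits/weight")
# Q2_K: ~3.3, Q4_K_M: ~5.0, Q8_0: ~8.5, F16: ~16.0
```

As a sanity check, F16 comes out at roughly 16 bits per weight, which confirms the estimate is in the right ballpark.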

A handy graph by ikawrakow comparing some lower-quality quant types (lower is better): [graph image omitted]

Downloads last month: 170
Format: GGUF
Model size: 4.02B params
Architecture: qwen3


Model tree for prithivMLmods/Qwen3-4B-PlumEsper-GGUF

Quantized (1): this model