# Lacaille-MoT-4B-Supreme2-GGUF

Lacaille-MoT-4B-Supreme2 is a high-efficiency, multi-domain model fine-tuned from Qwen3-4B on the Mixture of Thoughts (MoT) dataset, enhanced with code, math, and science expert clusters and an extended open code reasoning dataset. The model blends symbolic precision, scientific logic, and structured output fluency, making it an ideal tool for developers, educators, and researchers who need advanced reasoning under constrained compute.

## Model File Table

| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Lacaille-MoT-4B-Supreme2.BF16.gguf | 8.05 GB | GGUF (BF16) | BFloat16 precision model file |
| Lacaille-MoT-4B-Supreme2.F16.gguf | 8.05 GB | GGUF (F16) | Float16 precision model file |
| Lacaille-MoT-4B-Supreme2.F32.gguf | 16.1 GB | GGUF (F32) | Float32 precision model file |
| Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf | 2.5 GB | GGUF (Q4_K_M) | 4-bit quantized model file |
| Lacaille-MoT-4B-Supreme2.Q5_K_M.gguf | 2.89 GB | GGUF (Q5_K_M) | 5-bit quantized model file |
| Lacaille-MoT-4B-Supreme2.Q8_0.gguf | 4.28 GB | GGUF (Q8_0) | 8-bit quantized model file |
| config.json | 31 B | JSON | Configuration file |
| .gitattributes | 1.95 kB | Text | Git attributes configuration |
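
Any llama.cpp-compatible runtime can load the GGUF files above. Below is a minimal local-inference sketch using huggingface_hub and llama-cpp-python; the repo id and filename come from the table above, while the context size, prompt, and sampling settings are illustrative assumptions.

```python
# Minimal sketch; requires `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the 4-bit quant listed in the table above (cached after the first call).
model_path = hf_hub_download(
    repo_id="prithivMLmods/Lacaille-MoT-4B-Supreme2-GGUF",
    filename="Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf",
)

# n_ctx is an assumption; raise it if you have memory to spare.
llm = Llama(model_path=model_path, n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the chain rule in one paragraph."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```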

## Quants Usage

(Sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants.)

A handy graph by ikawrakow compares some of the lower-quality quant types (lower is better).
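
Since the quants above are ordered by file size, a quick rule of thumb is to pick the largest file that fits your memory budget with headroom for the KV cache and runtime overhead. The helper below is a rough sketch: the sizes come from the file table on this card, but the 1.3x headroom factor is an assumption, not a measured value.

```python
# Rough helper for picking a quant by available memory.
# Sizes (GB) are copied from the file table above.
QUANT_SIZES_GB = {
    "Q4_K_M": 2.5,
    "Q5_K_M": 2.89,
    "Q8_0": 4.28,
    "F16": 8.05,
    "BF16": 8.05,
    "F32": 16.1,
}

def pick_quant(available_ram_gb: float, headroom: float = 1.3) -> str:
    """Return the largest quant whose file size, scaled by a headroom
    factor for the KV cache and overhead, fits the given RAM budget."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items()
               if s * headroom <= available_ram_gb}
    if not fitting:
        raise ValueError("No quant fits in the given RAM budget")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # -> "Q8_0" on a machine with ~8 GB free
```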

## Model Details

- Format: GGUF
- Model size: 4.02B params
- Architecture: qwen3
- Available precisions: 4-bit, 5-bit, 8-bit, 16-bit, 32-bit


## Model Tree for prithivMLmods/Lacaille-MoT-4B-Supreme2-GGUF

- Base model: Qwen/Qwen3-4B-Base
- Finetuned: Qwen/Qwen3-4B
- Quantized: this model