Capricornus-MoT-1.7B-Supreme1-GGUF

Capricornus-MoT-1.7B-Supreme1 is a high-precision, multi-domain expert model fine-tuned from Qwen3-1.7B for code generation, mathematical reasoning, scientific analysis, and open technical inference. Trained on the Mixture of Thoughts (MoT) dataset, which combines expert clusters in code, math, and science, and further enhanced with an Open Code Reasoning dataset, it delivers strong symbolic and structured outputs across a wide range of STEM and reasoning domains.

Model Files

| File Name | Size | Format | Description |
| --- | --- | --- | --- |
| Capricornus-MoT-1.7B-Supreme1.BF16.gguf | 3.45 GB | GGUF (BF16) | BFloat16 precision model file |
| Capricornus-MoT-1.7B-Supreme1.F16.gguf | 3.45 GB | GGUF (F16) | Float16 precision model file |
| Capricornus-MoT-1.7B-Supreme1.F32.gguf | 6.89 GB | GGUF (F32) | Float32 precision model file |
| Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf | 1.11 GB | GGUF (Q4_K_M) | 4-bit quantized model file |
| Capricornus-MoT-1.7B-Supreme1.Q5_K_M.gguf | 1.26 GB | GGUF (Q5_K_M) | 5-bit quantized model file |
| Capricornus-MoT-1.7B-Supreme1.Q8_0.gguf | 1.83 GB | GGUF (Q8_0) | 8-bit quantized model file |
| config.json | 31 B | JSON | Configuration file |
| .gitattributes | 1.98 kB | Text | Git attributes configuration |
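The file sizes above follow directly from the parameter count (1.72B) and each format's bits per weight. A quick sanity check is sketched below; the bits-per-weight figures for the K-quants are approximate averages I am assuming here (quantized tensors mix block scales with a few higher-precision layers, so actual files run slightly larger):

```python
# Rough GGUF file-size estimate: params * bits_per_weight / 8 bytes.
# K-quant bits-per-weight values are approximate averages, not exact.
PARAMS = 1.72e9

BITS_PER_WEIGHT = {
    "F32": 32.0,
    "F16": 16.0,
    "BF16": 16.0,
    "Q8_0": 8.5,     # 8-bit values plus per-block scales
    "Q5_K_M": 5.7,   # approximate average across tensor types
    "Q4_K_M": 4.85,  # approximate average across tensor types
}

def estimated_gb(fmt: str, params: float = PARAMS) -> float:
    """Estimated file size in GB (1 GB = 1e9 bytes)."""
    return params * BITS_PER_WEIGHT[fmt] / 8 / 1e9

for fmt in BITS_PER_WEIGHT:
    print(f"{fmt:7s} ~{estimated_gb(fmt):.2f} GB")
```

The estimates land close to the table (F32 ≈ 6.88 GB, F16/BF16 ≈ 3.44 GB, Q8_0 ≈ 1.83 GB); metadata and non-quantized embedding/output tensors account for the small remaining gap on the 4-bit and 5-bit files.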

Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Model Details

Format: GGUF
Model size: 1.72B params
Architecture: qwen3

Repository: prithivMLmods/Capricornus-MoT-1.7B-Supreme1-GGUF