OpenR1-Distill-7B-F32-GGUF

OpenR1-Distill-7B-F32-GGUF provides quantized GGUF builds of OpenR1-Distill-7B, a post-trained model based on Qwen/Qwen2.5-Math-7B. OpenR1-Distill-7B was further trained on Mixture-of-Thoughts, a curated dataset of 350k verified reasoning traces distilled from DeepSeek-R1. The dataset covers tasks in mathematics, coding, and science, and is designed to teach language models to reason step by step.

Model Files

| File Name | Size | Format | Notes |
| --- | --- | --- | --- |
| OpenR1-Distill-7B.BF16.gguf | 15.2 GB | GGUF | BF16 precision model |
| OpenR1-Distill-7B.F16.gguf | 15.2 GB | GGUF | FP16 precision model |
| OpenR1-Distill-7B.F32.gguf | 30.5 GB | GGUF | FP32 precision model |
| OpenR1-Distill-7B.Q2_K.gguf | 3.02 GB | GGUF | 2-bit quantized (Q2_K) model |
| OpenR1-Distill-7B.Q4_K_M.gguf | 4.68 GB | GGUF | 4-bit quantized (Q4_K_M) model |
| .gitattributes | 1.84 kB | Text | Git LFS tracking config |
| config.json | 31 B | JSON | Model configuration file |
| README.md | 213 B | Markdown | This readme file |
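As a sketch of how these files might be used, the commands below download a single quant from this repo and run it with llama.cpp. This assumes `huggingface-cli` and llama.cpp's `llama-cli` are installed and on PATH; the prompt is only an illustration.

```shell
# Download one quant file from the repo (Git LFS handled automatically)
huggingface-cli download prithivMLmods/OpenR1-Distill-7B-F32-GGUF \
  OpenR1-Distill-7B.Q4_K_M.gguf --local-dir .

# Run inference with llama.cpp's CLI
llama-cli -m OpenR1-Distill-7B.Q4_K_M.gguf \
  -p "Solve step by step: what is 12 * 17?" -n 256
```

The Q4_K_M file is used here because it is the smallest quant in the table that typically preserves most of the model's quality; any of the listed `.gguf` files can be substituted.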

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
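A common way to choose among the files above is to pick the largest one that fits your memory budget. The hypothetical helper below does exactly that, using the file sizes from the table; the function name and the exact size values are illustrative, not part of any library.

```python
# Hypothetical helper: pick the largest quant from this repo that fits
# a given memory budget. Sizes (in GB) are taken from the file table.
from typing import Optional

QUANT_SIZES_GB = {
    "OpenR1-Distill-7B.Q2_K.gguf": 3.02,
    "OpenR1-Distill-7B.Q4_K_M.gguf": 4.68,
    "OpenR1-Distill-7B.BF16.gguf": 15.2,
    "OpenR1-Distill-7B.F16.gguf": 15.2,
    "OpenR1-Distill-7B.F32.gguf": 30.5,
}

def pick_quant(budget_gb: float) -> Optional[str]:
    """Return the largest file that fits the budget, or None if nothing fits."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))  # → OpenR1-Distill-7B.Q4_K_M.gguf
```

Note that the file size is only a lower bound on memory use: the KV cache and compute buffers add overhead on top of the weights, so in practice a little headroom beyond the file size is needed.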

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Model Details

Model size: 7.62B params
Architecture: qwen2
Available precisions: 2-bit, 4-bit, 16-bit, 32-bit
Downloads last month: 156


Model tree for prithivMLmods/OpenR1-Distill-7B-F32-GGUF

Base model: Qwen/Qwen2.5-7B
Quantized: this model (one of 4 quantizations)

Dataset used to train prithivMLmods/OpenR1-Distill-7B-F32-GGUF: Mixture-of-Thoughts