OpenThinker3-7B-GGUF

State-of-the-art open-data 7B reasoning model. This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the OpenThoughts3-1.2M dataset. It represents a notable improvement over our previous models, OpenThinker-7B and OpenThinker2-7B, and it outperforms several other strong reasoning 7B models such as DeepSeek-R1-Distill-Qwen-7B and Llama-3.1-Nemotron-Nano-8B-v1, despite being trained only with SFT, without any RL.

Model Files

| File Name | Size | Format | Description |
|---|---|---|---|
| OpenThinker3-7B.F32.gguf | 30.5 GB | F32 | Full-precision 32-bit floating point |
| OpenThinker3-7B.F16.gguf | 15.2 GB | F16 | Half-precision 16-bit floating point |
| OpenThinker3-7B.BF16.gguf | 15.2 GB | BF16 | Brain floating point, 16-bit |
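
The sizes follow from the parameter count: at 7.62B parameters, 16-bit weights come to roughly 7.62B × 2 bytes ≈ 15.2 GB, and 32-bit weights to roughly 30.5 GB. As a minimal sketch, a single variant can be fetched with the huggingface_hub library; the repo id and filename below are taken from this card:

```python
# Sketch: download one precision variant (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="prithivMLmods/OpenThinker3-7B-F32-GGUF",
    filename="OpenThinker3-7B.F16.gguf",  # or the F32 / BF16 file
)
print(path)  # local path of the cached GGUF file
```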

Usage

These GGUF-format files are intended for use with llama.cpp and compatible inference engines. Choose a precision level based on your hardware capabilities and quality requirements (a minimal loading sketch follows the list below):

  • F32: Highest quality, requires most memory
  • F16/BF16: Good balance of quality and memory efficiency
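
For example, here is a minimal inference sketch using the llama-cpp-python bindings. The choice of bindings is an assumption (any llama.cpp-compatible engine works), and the context size and prompt are arbitrary:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="OpenThinker3-7B.F16.gguf",  # file name from the table above
    n_ctx=4096,  # assumed context window; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```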

Quants Usage

(sorted by size, which is not necessarily quality order; IQ quants are often preferable to similarly sized non-IQ quants)

[Graph by ikawrakow comparing some lower-quality quant types; lower is better]
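
If you need smaller quants than the full-precision files shipped here, they can be produced from the F16 file with llama.cpp's llama-quantize tool. Below is a sketch driven from Python, assuming a current llama.cpp build with llama-quantize on PATH; the output filename and the Q4_K_M target are illustrative choices:

```python
# Sketch: produce a Q4_K_M quant from the F16 file via llama-quantize.
import subprocess

subprocess.run(
    [
        "llama-quantize",                # assumes a llama.cpp build on PATH
        "OpenThinker3-7B.F16.gguf",      # input file from this repo
        "OpenThinker3-7B.Q4_K_M.gguf",   # hypothetical output name
        "Q4_K_M",                        # target quant type
    ],
    check=True,
)
```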

Model Details

  • Format: GGUF
  • Model size: 7.62B params
  • Architecture: qwen2
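
These values can be checked locally by reading the GGUF header, e.g. with the gguf Python package; this is a sketch, assuming the package from PyPI and a downloaded file:

```python
# Sketch: list GGUF header metadata keys (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("OpenThinker3-7B.F16.gguf")
for key in reader.fields:
    print(key)  # e.g. general.architecture, qwen2.context_length, ...
```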

Model tree for prithivMLmods/OpenThinker3-7B-F32-GGUF

  • Base model: Qwen/Qwen2.5-7B
  • Quantized: this model (one of 6 quantizations)