Llama.cpp hybrid layer quantization of GLM-Z1-9B-0414 by THUDM

Original model: https://huggingface.co/THUDM/GLM-Z1-9B-0414

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant achieves a ~7.3G GGUF with the same perplexity and significantly better performance on a set of test/eval prompts compared to a ~8.3G Q6_K GGUF. The quants employed are all K quants to avoid slow processing of IQ quants on CPUs or older GPUs. For this file the layer quants are as follows:

   LAYER_TYPES='[
   [0 ,"Q6_K"  ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],[6 ,"Q4_K_M"],[7 ,"Q4_K_M"],
   [8 ,"Q5_K_M"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],[14,"Q5_K_M"],[15,"Q5_K_S"],
   [16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q6_K"  ],[21,"Q5_K_M"],[22,"Q6_K"  ],[23,"Q5_K_M"],
   [24,"Q6_K"  ],[25,"Q5_K_M"],[26,"Q6_K"  ],[27,"Q6_K"  ],[28,"Q6_K"  ],[29,"Q8_0"  ],[30,"Q8_0"  ],[31,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K"
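
These variables can be applied in a quantization run along the following lines. This is a sketch only: the embedding and output tensor flags in FLAGS are stock llama-quantize options, but the per-layer LAYER_TYPES list relies on a patched build as described in the llama.cpp discussion linked at the end of this card, and the file names and trailing base type are illustrative.

    # Sketch: per-layer types need a llama-quantize patched to read the
    # LAYER_TYPES environment variable (not a stock llama.cpp option).
    # Input/output file names and the trailing Q6_K base type are illustrative.
    LAYER_TYPES="$LAYER_TYPES" llama-quantize $FLAGS \
        GLM-Z1-9B-0414.BF16.gguf GLM-Z1-9B-0414.Q6_K_H.gguf Q6_K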

Comparison:

| Quant | Size | PPL | Comment |
| ----- | ---- | --- | ------- |
| Q6_K | 8.3e9 | 14.7 | Default embedding and output; unstable with greedy sampling, poor performance on eval prompts |
| Q6_K_H | 7.3e9 | 14.8 | Hybrid quant with Q6_K embedding and Q6_K output; stable with greedy sampling, excellent performance on eval prompts |
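
The perplexity figures above can be cross-checked with llama.cpp's llama-perplexity tool. The sketch below uses a placeholder evaluation file, since the card does not state which corpus produced the numbers in the table.

    # Measure perplexity of the hybrid quant; eval.txt is a placeholder for
    # the evaluation corpus.
    llama-perplexity -m GLM-Z1-9B-0414.Q6_K_H.gguf -f eval.txt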

Usage:

This is an RL-trained thinking model. The layer quants for this model were optimized for 100% success on a set of test/eval prompts. After achieving that goal, it showed very strong performance on problems outside the test/eval prompt set using greedy sampling, and it does not exhibit excessive overthinking when solving. A straightforward Q6_K quant was found to be unstable with greedy sampling (it never stops generating on some problems) and unable to solve several of the test/eval problems.
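
For reference, greedy sampling corresponds to running at temperature 0, for example with llama.cpp's llama-cli (the context size shown is illustrative):

    # Interactive run with greedy sampling (temperature 0); context size is
    # illustrative.
    llama-cli -m GLM-Z1-9B-0414.Q6_K_H.gguf --temp 0 -c 8192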

As of 7/21/2025, this is one of the strongest general reasoning models I have experienced to date, independent of size, compared against QwQ, the R1 distills of Qwen 2.5, and Qwen 3. However, testing on code problems shows it is extremely weak at code generation.

Benchmarks:

A set of math benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the file from the link below:

| Link | Type | Size | Notes |
| ---- | ---- | ---- | ----- |
| GLM-Z1-9B-0414.Q6_K_H.gguf | Q6_K_H | 7.3e9 B | ~1e9 B smaller than Q6_K with much better performance |
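
The file can also be fetched from the command line, for example with huggingface-cli (repository name taken from this model page):

    # Download the single GGUF into the current directory.
    huggingface-cli download steampunque/GLM-Z1-9B-0414-Hybrid-GGUF \
        GLM-Z1-9B-0414.Q6_K_H.gguf --local-dir .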

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
