DeepSeek-R1-0528-GPTQ-4b-128g-experts

Model Overview

This model was obtained by quantizing the weights of deepseek-ai/DeepSeek-R1-0528 to the INT4 data type, reducing the number of bits per parameter from 8 to 4 and cutting disk size and GPU memory requirements by approximately 50%.

Only the non-shared experts within the transformer blocks are compressed. Weights are quantized with a symmetric per-group scheme (group size 128), using the GPTQ algorithm.
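As an illustration, the snippet below sketches the symmetric per-group scheme on a single weight matrix: each group of 128 consecutive input channels shares one scale, and values are rounded to the signed INT4 range. This is a minimal sketch of the quantization grid only; GPTQ itself additionally applies Hessian-based error compensation when choosing the quantized values, which is not shown here, and the function names are ours.

```python
import torch

def quantize_symmetric_per_group(w: torch.Tensor, group_size: int = 128, bits: int = 4):
    """Round a 2-D weight matrix to a symmetric per-group INT4 grid.

    Assumes in_features is divisible by group_size. Each group of
    `group_size` consecutive input channels shares one scale, chosen so
    the group's max absolute value maps to the top of the signed range.
    """
    qmax = 2 ** (bits - 1) - 1  # 7 for INT4; valid codes are [-8, 7]
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per group; clamp avoids division by zero for all-zero groups.
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / qmax
    q = torch.clamp(torch.round(groups / scales), -qmax - 1, qmax)
    return q.reshape(w.shape).to(torch.int8), scales.squeeze(-1)

def dequantize(q: torch.Tensor, scales: torch.Tensor, group_size: int = 128):
    """Map INT4 codes back to floating point using the per-group scales."""
    out_features, in_features = q.shape
    groups = q.reshape(out_features, in_features // group_size, group_size).float()
    return (groups * scales.unsqueeze(-1)).reshape(q.shape)
```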

The model checkpoint is saved in the compressed_tensors format.

Evaluation

This model was evaluated on reasoning tasks (AIME 2024, GPQA Diamond, MATH-500).

Model outputs were generated with the vLLM engine.

For reasoning tasks, we estimate pass@1 based on 10 runs with different seeds, using temperature=0.6, top_p=0.95, and max_new_tokens=65536.
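For reference, a minimal generation sketch with vLLM using these sampling settings is shown below. The tensor_parallel_size value is an assumption; set it to the number of GPUs available (the full model still requires a multi-GPU node even at 4-bit).

```python
from vllm import LLM, SamplingParams

# Load the 4-bit checkpoint; the compressed_tensors format is handled by vLLM.
# tensor_parallel_size=8 is an assumption; adjust it to your GPU count.
llm = LLM(
    model="ISTA-DASLab/DeepSeek-R1-0528-GPTQ-4b-128g-experts",
    tensor_parallel_size=8,
    trust_remote_code=True,
)

# Sampling settings used for the reasoning evaluations reported below.
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=65536)

outputs = llm.generate(
    ["Find the sum of the first 100 positive integers. Show your reasoning."],
    params,
)
print(outputs[0].outputs[0].text)
```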

| Benchmark | Recovery (%) | deepseek-ai/DeepSeek-R1-0528 | ISTA-DASLab/DeepSeek-R1-0528-GPTQ-4b-128g-experts (this model) |
|---|---|---|---|
| AIME 2024 (pass@1) | 98.50 | 88.66 | 87.33 |
| MATH-500 (pass@1) | 99.88 | 97.52 | 97.40 |
| GPQA Diamond (pass@1) | 101.21 | 79.65 | 80.61 |
| Reasoning average score | 99.82 | 88.61 | 88.45 |
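For clarity, the sketch below shows how the reported pass@1 and Recovery (%) values relate. The per-seed scores are hypothetical, chosen only to reproduce the AIME 2024 row above.

```python
# Hypothetical pass@1 accuracies (%) from 10 runs with different seeds.
per_seed_scores = [87.0, 88.3, 86.7, 87.9, 87.1, 87.5, 86.9, 87.8, 87.4, 86.7]

# pass@1 is the mean single-sample accuracy over the 10 runs.
pass_at_1 = sum(per_seed_scores) / len(per_seed_scores)

# Recovery compares the quantized model against the unquantized baseline.
baseline_pass_at_1 = 88.66  # deepseek-ai/DeepSeek-R1-0528 on AIME 2024
recovery = 100.0 * pass_at_1 / baseline_pass_at_1

print(f"pass@1 = {pass_at_1:.2f}, recovery = {recovery:.2f}%")
# -> pass@1 = 87.33, recovery = 98.50%
```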

Contributors

Denis Kuznedelev (Yandex), Eldar Kurtić (Red Hat AI & ISTA), and Dan Alistarh (Red Hat AI & ISTA).
