---
license: apache-2.0
base_model:
- Qwen/QwQ-32B
base_model_relation: quantized
library_name: transformers
tags:
- qwq
- fp8
---
# Model Overview

## Description

This is an FP8-quantized version of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B).
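
Below is a minimal usage sketch, assuming the checkpoint loads through the standard `transformers` API. The repository id `your-org/QwQ-32B-FP8` is a placeholder; substitute this repo's actual id.

```python
# Minimal sketch: load the FP8 checkpoint and run a chat-style generation.
# "your-org/QwQ-32B-FP8" is a hypothetical placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/QwQ-32B-FP8"  # replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are below 20?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```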

## Evaluation

The results in the following table were obtained on the MMLU benchmark.

To speed up evaluation, we limited the length of the model's chain of thought, so the scores may differ from those obtained with longer reasoning chains (a sketch of this setting follows the table).

In our experiments, **the accuracy of the FP8-quantized version is nearly identical to that of the BF16 version, and it can be used for faster inference.**

| Data Format | MMLU Score |
|:---|:---|
| BF16 Official | 61.2 |
| FP8 Quantized | 61.2 |
| Q8_0 (INT8) | 59.1 |
| AWQ (INT4) | 53.4 |
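
Continuing the loading sketch above, one way the "short chain-of-thought" setting could be reproduced is by bounding `max_new_tokens` during generation. The exact cap used for the scores above is not stated, so the value below is an assumption.

```python
# Hedged sketch of capping the chain of thought during evaluation:
# bounding max_new_tokens prevents arbitrarily long reasoning traces.
outputs = model.generate(
    inputs,
    max_new_tokens=1024,  # assumed cap; the card does not state the value used
    do_sample=False,      # greedy decoding for reproducible benchmark scores
)
```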

## Contact

[email protected]