---
license: apache-2.0
base_model:
- open-r1/OpenR1-Qwen-7B
---

AWQ 4-bit quantized version of [open-r1/OpenR1-Qwen-7B](https://huggingface.co/open-r1/OpenR1-Qwen-7B), produced with AutoAWQ using the script below.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "open-r1/OpenR1-Qwen-7B"

# Load the full-precision model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# AWQ settings: 4-bit weights, group size 128, zero-point quantization, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Quantize the weights (calibration uses AutoAWQ's default dataset)
model.quantize(tokenizer, quant_config=quant_config)
```
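
For inference, the quantized checkpoint can be loaded directly with `transformers` (with `autoawq` installed). A minimal sketch, assuming the quantized model has been saved with `model.save_quantized(...)` and `tokenizer.save_pretrained(...)` and that the repo id below is a placeholder for the actual location of this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual id of this AWQ checkpoint
model_id = "your-username/OpenR1-Qwen-7B-AWQ"

# The AWQ quantization config stored in the checkpoint is picked up automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "What is the sum of the first 10 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```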