|
--- |
|
license: other |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
inference: false |
|
tags: |
|
- transformers |
|
- gguf |
|
- imatrix |
|
- QwQ-32B |
|
--- |
|
Quantizations of https://huggingface.co/Qwen/QwQ-32B |
|
|
|
**Note**: you will need llama.cpp [b4875](https://github.com/ggml-org/llama.cpp/releases/tag/b4875) or later to run the model. |
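If you prefer a Python harness over the llama.cpp CLI, the GGUF files can also be loaded through the `llama-cpp-python` bindings (built against a llama.cpp at least as new as b4875). A minimal sketch; the quant file name below is hypothetical and depends on which file you download:

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
# The model_path is hypothetical; substitute the quant you actually chose.
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=32768,       # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many r's are in the word \"strawberry\""}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=4096,
)
print(out["choices"][0]["message"]["content"])
```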
|
|
|
### Open source inference clients/UIs |
|
* [llama.cpp](https://github.com/ggerganov/llama.cpp) |
|
* [KoboldCPP](https://github.com/LostRuins/koboldcpp) |
|
* [ollama](https://github.com/ollama/ollama) |
|
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui) |
|
* [jan](https://github.com/janhq/jan) |
|
* [GPT4All](https://github.com/nomic-ai/gpt4all) |
|
|
|
### Closed source inference clients/UIs |
|
* [LM Studio](https://lmstudio.ai/) |
|
* [Backyard AI](https://backyard.ai/) |
|
* More will be added... |
|
--- |
|
|
|
# From original readme |
|
|
|
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini. |
|
|
|
|
|
**This repo contains the QwQ 32B model**, which has the following features: |
|
- Type: Causal Language Models |
|
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning) |
|
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias |
|
- Number of Parameters: 32.5B |
|
- Number of Parameters (Non-Embedding): 31.0B
|
- Number of Layers: 64 |
|
- Number of Attention Heads (GQA): 40 for Q and 8 for KV |
|
- Context Length: Full 131,072 tokens |
|
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines). |
|
|
|
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models. |
|
|
|
You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai). |
|
|
|
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). |
|
|
|
## Requirements |
|
|
|
QwQ is based on Qwen2.5, whose code has been merged into the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
|
|
|
With `transformers<4.37.0`, you will encounter the following error: |
|
``` |
|
KeyError: 'qwen2' |
|
``` |
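A quick way to confirm that the installed release is new enough is sketched below (`packaging` ships as a dependency of `transformers`):

```python
# Sketch: verify the installed transformers release supports the 'qwen2'
# architecture (4.37.0 is the minimum mentioned above).
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for QwQ (needs >= 4.37.0)"
)
```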
|
|
|
## Quickstart |
|
|
|
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
|
|
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer |
|
|
|
model_name = "Qwen/QwQ-32B" |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_name, |
|
torch_dtype="auto", |
|
device_map="auto" |
|
) |
|
tokenizer = AutoTokenizer.from_pretrained(model_name) |
|
|
|
prompt = "How many r's are in the word \"strawberry\"" |
|
messages = [ |
|
{"role": "user", "content": prompt} |
|
] |
|
text = tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True |
|
) |
|
|
|
model_inputs = tokenizer([text], return_tensors="pt").to(model.device) |
|
|
|
generated_ids = model.generate( |
|
**model_inputs, |
|
max_new_tokens=32768 |
|
) |
|
generated_ids = [ |
|
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) |
|
] |
|
|
|
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] |
|
print(response) |
|
``` |
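The snippet above uses the default generation settings. The [usage guidelines](#usage-guidelines) below recommend sampling rather than greedy decoding, and keeping thinking content out of the conversation history. A minimal sketch of both, continuing from the quickstart (the follow-up question is only an illustration):

```python
# Continue the quickstart with the recommended sampling settings
# (Temperature=0.6, TopP=0.95, TopK in 20-40) instead of greedy decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Keep only the final answer (not the thinking content) in the history,
# as guideline 3 below advises; adjust if your tokenizer settings strip
# the </think> marker during decoding.
final_answer = response.split("</think>")[-1].strip()
messages.append({"role": "assistant", "content": final_answer})
messages.append({"role": "user", "content": "Answer again with a single digit only."})
```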
|
|
|
### Usage Guidelines |
|
|
|
To achieve optimal performance, we recommend the following settings: |
|
|
|
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent it from generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` with `add_generation_prompt=True`, this is handled automatically, but it also means the response may lack an opening \<think\> tag; this is normal behavior.
|
|
|
2. **Sampling Parameters**: |
|
- Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions. |
|
- Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output. |
|
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance. |
|
|
|
3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in `apply_chat_template`. |
|
|
|
4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. |
|
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. |
|
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
|
|
|
5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively. |
|
|
|
For supported frameworks, you could add the following to `config.json` to enable YaRN: |
|
```json |
|
{ |
|
..., |
|
"rope_scaling": { |
|
"factor": 4.0, |
|
"original_max_position_embeddings": 32768, |
|
"type": "yarn" |
|
} |
|
} |
|
``` |
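If editing `config.json` on disk is inconvenient, the same override can usually be applied programmatically; a minimal sketch for a `transformers` setup (assuming a release whose Qwen2 implementation supports YaRN rope scaling):

```python
# Sketch: apply the YaRN configuration shown above without editing
# config.json, by overriding rope_scaling on the loaded config.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/QwQ-32B")
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/QwQ-32B",
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```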
|
|
|
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
|
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
|
We advise adding the `rope_scaling` configuration only when processing long contexts is required. |
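As a starting point, a minimal offline-inference sketch with the vLLM Python API, using the sampling settings recommended above (assuming a recent vLLM release that provides `LLM.chat`; see the documentation linked above for serving and parallelism options):

```python
# Sketch: offline inference with vLLM using the recommended sampling settings.
from vllm import LLM, SamplingParams

# A 32B model generally needs multiple GPUs; set tensor_parallel_size accordingly.
llm = LLM(model="Qwen/QwQ-32B", tensor_parallel_size=2)
params = SamplingParams(temperature=0.6, top_p=0.95, min_p=0.0, max_tokens=32768)

messages = [{"role": "user", "content": "How many r's are in the word \"strawberry\""}]
outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)
```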