QwQ 32B - llamafile
- Model creator: Qwen
- Original model: Qwen/QwQ-32B
Mozilla packaged the Qwen QwQ 32B model into executable weights that we call llamafiles. This gives you the easiest and fastest way to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD systems you control, on both AMD64 and ARM64.
Software Last Updated: 2025-03-31
Llamafile Version: 0.9.2
Quickstart
To get started, you need both the Qwen QwQ 32B weights, and the llamafile software. Both of them are included in a single file, which can be downloaded and run as follows:
wget https://huggingface.co/Mozilla/QwQ-32B-llamafile/resolve/main/Qwen_QwQ-32B-Q6_K.llamafile
chmod +x Qwen_QwQ-32B-Q6_K.llamafile
./Qwen_QwQ-32B-Q6_K.llamafile
The default mode of operation for these llamafiles is our new command line chatbot interface.
Usage
You can use triple quotes to ask questions on multiple lines. You can pass commands like /stats and /context to see runtime status information. You can change the system prompt by passing the -p "new system prompt" flag. You can press CTRL-C to interrupt the model. Finally, CTRL-D may be used to exit.
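For example, to start the chat interface with a custom system prompt (the prompt text below is only a placeholder; substitute your own):
./Qwen_QwQ-32B-Q6_K.llamafile -p "You are a concise assistant that answers in plain English."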
If you prefer to use a web GUI, then a --server mode is provided that will open a tab with a chatbot and completion interface in your browser. For additional help on how it may be used, pass the --help flag. The server also has an OpenAI API compatible completions endpoint that can be accessed via Python using the openai pip package.
./Qwen_QwQ-32B-Q6_K.llamafile --server
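As a sketch of how that endpoint can be reached with the openai package, assuming the server is running locally on its default port of 8080 (adjust base_url if you changed the host or port):
from openai import OpenAI

# Point the client at the local llamafile server instead of api.openai.com.
# The local server does not validate the API key, so any placeholder string works.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

completion = client.chat.completions.create(
    model="QwQ-32B",  # the server answers with whatever model the llamafile was started with
    messages=[{"role": "user", "content": "How many r's are in the word \"strawberry\"?"}],
)
print(completion.choices[0].message.content)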
An advanced CLI mode is provided that's useful for shell scripting. You can use it by passing the --cli flag. For additional help on how it may be used, pass the --help flag.
./Qwen_QwQ-32B-Q6_K.llamafile --cli -p 'four score and seven' --log-disable
Troubleshooting
Having trouble? See the "Gotchas" section of the README.
On Linux, the way to avoid run-detector errors is to install the APE interpreter.
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
On Windows there's a 4GB limit on executable sizes.
Context Window
This model has a max context window size of 128k tokens. By default, a context window size of 8192 tokens is used. You can ask llamafile to use the maximum context size by passing the -c 0 flag. That's big enough for a small book. If you want to be able to have a conversation with your book, you can use the -f book.txt flag.
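For example, a hypothetical invocation that unlocks the full context window and loads a local text file into the conversation (book.txt is a placeholder for your own file):
./Qwen_QwQ-32B-Q6_K.llamafile -c 0 -f book.txt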
GPU Acceleration
On GPUs with sufficient RAM, the -ngl 999 flag may be passed to use the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card driver needs to be installed if you own an NVIDIA GPU. On Windows, if you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass the flags --recompile --gpu amd the first time you run your llamafile.
On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to perform matrix multiplications. This is open source software, but it doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK installed on your system, then you can pass the --recompile flag to build a GGML CUDA library just for your system that uses cuBLAS. This ensures you get maximum performance.
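For example, to offload all layers to the GPU (assuming sufficient VRAM; combine with --recompile or --gpu amd as described above when needed):
./Qwen_QwQ-32B-Q6_K.llamafile -ngl 999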
For further information, please see the llamafile README.
About llamafile
llamafile is a new format introduced by Mozilla on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
QwQ-32B
Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
This repo contains the QwQ 32B model, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in this section.
Note: For the best experience, please review the usage guidelines before deploying QwQ models.
You can try our demo or access QwQ models via QwenChat.
For more details, please refer to our blog, GitHub, and Documentation.
Requirements
QwQ is based on Qwen2.5, whose code has been merged into the latest Hugging Face transformers. We advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
KeyError: 'qwen2'
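If you see that error, upgrading usually resolves it; a minimal sketch, assuming pip manages your environment:
pip install --upgrade "transformers>=4.37.0"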
Quickstart
The following code snippet uses apply_chat_template to show you how to load the tokenizer and model and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"

# Load the weights and tokenizer; device_map="auto" spreads the model across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
# Render the chat template; add_generation_prompt=True appends the assistant turn marker.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
Usage Guidelines
To achieve optimal performance, we recommend the following settings:
- Enforce Thoughtful Output: Ensure the model starts with "<think>\n" to prevent generating empty thinking content, which can degrade output quality. If you use apply_chat_template and set add_generation_prompt=True, this is already automatically implemented, but it may cause the response to lack the <think> tag at the beginning. This is normal behavior.
- Sampling Parameters (see the sketch after this list):
  - Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions.
  - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
  - For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
- No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in apply_chat_template (the sketch after this list also shows a manual way to do it).
- Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
  - Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
  - Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., \"answer\": \"C\"."
- Handle Long Inputs: For inputs exceeding 8,192 tokens, enable YaRN to improve the model's ability to capture long-sequence information effectively. For supported frameworks, you could add the following to config.json to enable YaRN:
  {
    ...,
    "rope_scaling": {
      "factor": 4.0,
      "original_max_position_embeddings": 32768,
      "type": "yarn"
    }
  }
  For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
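As a rough illustration of the sampling and history guidance above, here is a minimal sketch using the transformers generate API. The parameter values mirror the recommendations; the TopK value of 30 and the manual </think> split are illustrative choices, not the only valid ones.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "How many r's are in the word \"strawberry\""}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Recommended sampling settings: Temperature=0.6, TopP=0.95, MinP=0, TopK in [20, 40].
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,
    min_p=0.0,
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]

# For multi-turn use, keep only the final answer in the history: drop everything
# up to and including the closing </think> tag before appending the assistant turn.
final_answer = response.split("</think>")[-1].strip()
messages.append({"role": "assistant", "content": final_answer})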
Evaluation & Performance
Detailed evaluation results are reported in this blog.
For requirements on GPU memory and the respective throughput, see results here.
Citation
If you find our work helpful, feel free to cite us.
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5 Technical Report},
author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal={arXiv preprint arXiv:2412.15115},
year={2024}
}