duyntnet committed (verified)
Commit 578e6d1 · 1 parent: 8ebf2f0

Upload README.md with huggingface_hub

Files changed (1): README.md added (+135 -0)
 
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- QwQ-32B
---
Quantizations of https://huggingface.co/Qwen/QwQ-32B

**Note**: you will need llama.cpp [b4875](https://github.com/ggml-org/llama.cpp/releases/tag/b4875) or later to run the model.
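
As a quick sanity check of the quantized files themselves, here is a minimal sketch of downloading one quant and running it with llama.cpp's `llama-cli` (the repo id and filename are placeholders for whichever GGUF file from this repo you choose; the sampling values follow the usage guidelines further down):

```bash
# Download a single quant file (placeholder repo id and filename).
huggingface-cli download <this-repo-id> QwQ-32B-Q4_K_M.gguf --local-dir .

# Run it in conversation mode with llama.cpp b4875 or later;
# -ngl offloads layers to the GPU if one is available.
./llama-cli -m QwQ-32B-Q4_K_M.gguf -cnv \
  --temp 0.6 --top-p 0.95 --top-k 40 --min-p 0 \
  -c 32768 -ngl 99
```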

### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/janhq/jan)
* [GPT4All](https://github.com/nomic-ai/gpt4all)

### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* [Backyard AI](https://backyard.ai/)
* More will be added...
---

# From original readme

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, which lets it achieve significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, capable of competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines).

**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.

You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

QwQ is based on Qwen2.5, whose code is included in the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
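
To upgrade (assuming a pip-based environment):

```bash
pip install --upgrade transformers
```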

## Quickstart

The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
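
Since the chat template already supplies the opening `<think>` tag (see the usage guidelines below), the generated text normally contains only the closing tag. Continuing from the `response` variable above, here is a minimal sketch of separating the visible reasoning from the final answer, assuming the model emits a literal `</think>` marker:

```python
# Assumption: QwQ closes its reasoning block with a literal "</think>" tag.
# If the tag is absent, treat the whole output as the answer.
if "</think>" in response:
    thinking, answer = response.split("</think>", 1)
else:
    thinking, answer = "", response

print(answer.strip())
```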

### Usage Guidelines

To achieve optimal performance, we recommend the following settings:

1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already handled automatically, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.

2. **Sampling Parameters** (a code sketch applying these values follows after this list):
   - Use Temperature=0.6, TopP=0.95, MinP=0 instead of greedy decoding to avoid endless repetitions.
   - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.

3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part; it does not need to include the thinking content. This is already implemented in `apply_chat_template`.

4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."

5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
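
Following guideline 2, here is a minimal sketch of passing these sampling settings to the quickstart's `generate` call above (these are standard `transformers` generation arguments; `min_p` requires a recent `transformers` release, and `presence_penalty` belongs to frameworks that expose it, such as OpenAI-compatible servers, rather than `transformers`):

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sample instead of greedy decoding to avoid endless repetition
    temperature=0.6,
    top_p=0.95,
    top_k=40,         # anywhere in the recommended 20-40 range
    min_p=0.0,        # requires a recent transformers version
)
```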

For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
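
As a minimal sketch of such a deployment (assuming vLLM is installed; exact flags and defaults vary across vLLM versions, and the GGUF files in this repo are intended for llama.cpp-based clients rather than vLLM):

```bash
# Serve the original weights behind an OpenAI-compatible API.
# --max-model-len is kept at 32768 here because YaRN is not enabled in this sketch.
vllm serve Qwen/QwQ-32B --max-model-len 32768
```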