Introduction
This is an exl2 (ExLlamaV2) quantized version of QwQ Bakeneko 32B, a state-of-the-art Japanese fine-tuned variant of QwQ.
Exl2 Quantization
ExLlamaV2 v0.2.8
average: 8.0 bpw, lm_head: 8.0 bpw
python convert.py \
-i {path}/qwq-bakeneko-32b \
-o {path}/qwq-bakeneko-32b/workingdir/ \
-cf {path}/qwq-bakeneko-32b_8.0bpw/ \
-b 8.0 \
-hb 8
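The directory written via -cf can then be loaded directly with ExLlamaV2. Below is a minimal sketch using the library's dynamic generator API; the local path, sequence length, and prompt are illustrative:

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point at the directory produced by convert.py via -cf (path is illustrative).
config = ExLlamaV2Config("qwq-bakeneko-32b_8.0bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=32768, lazy=True)
model.load_autosplit(cache, progress=True)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="こんにちは", max_new_tokens=128, add_bos=True))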
Example chat
The major difference from the original QwQ-32B model is that the thinking process, enclosed in a <think> block, is produced in Japanese, as in the example below (given here in English translation).
'user': 'Long-thinking LLMs like o1 have appeared. Please explain in detail the mechanism behind, and the reasons why, longer thinking improves an LLM's answer accuracy.'
'assistant':
'<think>
First, the user's question is: "Long-thinking LLMs like o1 have appeared; please explain in detail the mechanism and reasons why longer thinking improves an LLM's answer accuracy." This question needs to be answered accurately and clearly.
To begin, the concept of a "long-thinking model" should be laid out. "Long thinking" means the model reasons over multiple steps: rather than producing a conventional one-shot output, it performs staged inference. A model like o1 appears to search for an answer while internally running through several rounds of thinking steps.
Next, consider the mechanism by which answer accuracy improves. Conventional LLMs often try to process everything at once and are therefore prone to errors, whereas a "long-thinking" model breaks the information down and processes it in stages, which plausibly reduces mistakes. Techniques such as Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) may be involved.
Then explore the reasons accuracy improves. Staged reasoning allows checks and error correction at intermediate steps, and weighing multiple approaches presumably makes it easier to find the optimal solution.
The technical angle also needs examination, for example differences in model architecture or training methods. A "long-thinking" model might iterate over Transformer blocks or include a looping mechanism, or it may rely on memory-efficient processing.
A concrete example would make this clearer. In a math problem, following the steps "set up the equation → compute → recheck the answer" reduces errors. A "long-thinking" model likewise gains accuracy by verifying its work as it advances through each step.
There are caveats, though: "long thinking" brings trade-offs such as higher computational cost and longer runtimes. The user may want these addressed as well, so the explanation should be balanced.
Finally, to summarize: a "long-thinking model" achieves higher accuracy because staged reasoning enables error correction at intermediate steps and lets the best answer be chosen from multiple approaches. The downside of extra resource consumption means it should be applied where appropriate.
</think>
The actual answer follows here.
'
QwQ Bakeneko 32B (rinna/qwq-bakeneko-32b)
Overview
This model is an instruction-tuned reasoning variant of rinna/qwen2.5-bakeneko-32b, fine-tuned using Chat Vector and Odds Ratio Preference Optimization (ORPO). It adheres to the Qwen/QwQ-32B chat format and is designed to deliver superior performance in Japanese language tasks.
| Model Type | Model Name |
| --- | --- |
| Japanese Continual Pre-Training Model | Qwen2.5 Bakeneko 32B [HF] |
| Instruction-Tuning Model | Qwen2.5 Bakeneko 32B Instruct [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
| DeepSeek R1 Distill Qwen2.5 Merged Reasoning Model | DeepSeek R1 Distill Qwen2.5 Bakeneko 32B [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
| QwQ Merged Reasoning Model | QwQ Bakeneko 32B [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
Model architecture
A 64-layer, 5120-hidden-size transformer-based language model. For a comprehensive understanding of the architecture, please refer to the Qwen2.5 Technical Report.
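These dimensions can be confirmed from the published configuration, for example:

from transformers import AutoConfig

# Sanity-check the architecture figures quoted above.
cfg = AutoConfig.from_pretrained("rinna/qwq-bakeneko-32b")
print(cfg.num_hidden_layers, cfg.hidden_size)  # expected: 64 5120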
Training
This model was developed through a multi-stage training process:
Model merging. The base model, rinna/qwen2.5-bakeneko-32b, was augmented with instruction-following capabilities through a Chat Vector addition process. The Chat Vector was derived by subtracting the parameter vectors of Qwen/Qwen2.5-32B from Qwen/QwQ-32B, as follows.
rinna/qwen2.5-bakeneko-32b + 0.8 * (Qwen/QwQ-32B - Qwen/Qwen2.5-32B)
During this process, the embedding layer was omitted when performing the subtraction and addition of parameter vectors.
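As a rough, hedged illustration of this merge (not rinna's actual script; the embedding skip is simplified to a parameter-name match, and the output path is hypothetical), the arithmetic looks like this:

import torch
from transformers import AutoModelForCausalLM

# merged = bakeneko + 0.8 * (QwQ-32B - Qwen2.5-32B), skipping the embedding layer.
base = AutoModelForCausalLM.from_pretrained("rinna/qwen2.5-bakeneko-32b", torch_dtype=torch.bfloat16)
qwq = AutoModelForCausalLM.from_pretrained("Qwen/QwQ-32B", torch_dtype=torch.bfloat16)
pre = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B", torch_dtype=torch.bfloat16)

qwq_sd, pre_sd = qwq.state_dict(), pre.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        if "embed_tokens" in name:  # the embedding layer is omitted, as noted above
            continue
        param.add_(0.8 * (qwq_sd[name] - pre_sd[name]))

base.save_pretrained("qwen2.5-bakeneko-32b-chatvector")  # hypothetical output path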
Distillation and ORPO. The merged model was further refined using ORPO, trained on 1.3k carefully curated data samples generated by DeepSeek-R1.
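This ORPO stage could be sketched with TRL's ORPOTrainer as below. This is an assumed setup rather than rinna's published recipe, and the placeholder rows stand in for the curated DeepSeek-R1-generated preference samples, which are not released here:

from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Hypothetical merged checkpoint from the Chat Vector step above.
model_id = "qwen2.5-bakeneko-32b-chatvector"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ORPO trains on preference pairs: a prompt plus a preferred and a rejected response.
train_dataset = Dataset.from_list([
    {"prompt": "...", "chosen": "...", "rejected": "..."},
])

args = ORPOConfig(output_dir="orpo-out", beta=0.1, per_device_train_batch_size=1)
# processing_class is named tokenizer= in older trl releases.
trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()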
Contributors
Xinqi Chen, Toshiaki Wakatsuki, Kei Sawada
Release date
March 13, 2025
Benchmarking
| Model | Japanese LM Evaluation Harness | Japanese MT-Bench (first turn) | Japanese MT-Bench (multi turn) |
| --- | --- | --- | --- |
| Qwen/Qwen2.5-32B | 79.46 | - | - |
| rinna/qwen2.5-bakeneko-32b | 79.18 | - | - |
| Qwen/Qwen2.5-32B-Instruct | 78.29 | 8.13 | 7.54 |
| rinna/qwen2.5-bakeneko-32b-instruct | 79.62 | 8.17 | 7.66 |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 73.51 | 7.39 | 6.88 |
| rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b | 77.43 | 8.58 | 8.19 |
| Qwen/QwQ-32B | 76.12 | 8.58 | 8.25 |
| rinna/qwq-bakeneko-32b | 78.31 | 8.81 | 8.52 |
For detailed benchmarking results, please refer to rinna's LM benchmark page.
How to use the model
Setup
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "rinna/qwq-bakeneko-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "微分に関する簡単な文章問題を作成し、その問題を解いてください。"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
temperature=0.6,
top_k=40,
top_p=0.95,
)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
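# The chat template ends the generation prompt with an opened <think> tag, so restore it here for a complete transcript.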
response = "<think>\n" + response
print(response)
Recommendations
For optimal performance, it is recommended to review the Usage Guidelines before deploying this model.
Tokenization
This model inherits the original Qwen/QwQ-32B tokenizer.
How to cite
@misc{rinna/qwq-bakeneko-32b,
title = {rinna/qwq-bakeneko-32b},
author = {Chen, Xinqi and Wakatsuki, Toshiaki and Sawada, Kei},
url = {https://huggingface.co/rinna/qwq-bakeneko-32b}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}
References
@article{qwen2.5,
title = {Qwen2.5 Technical Report},
author = {An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal = {arXiv preprint arXiv:2412.15115},
year = {2024}
}
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title = {DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author = {DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year = {2025},
eprint = {2501.12948},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2501.12948},
}
@article{huang2023chat,
title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
year = {2023},
url = {https://arxiv.org/abs/2310.04799}
}
@inproceedings{hong2024orpo,
title = {ORPO: Monolithic Preference Optimization without Reference Model},
author = {Hong, Jiwoo and Lee, Noah and Thorne, James},
booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
pages = {11170--11189},
year = {2024}
}
License
The Apache License, Version 2.0