---
language:
- en
pipeline_tag: text-generation
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
- OpenHermes-2.5
- HelpSteer2
- Orca
- SlimOrca
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: Qwen/Qwen2-7B
model_name: calme-2.6-qwen2-7b
datasets:
- nvidia/HelpSteer2
- teknium/OpenHermes-2.5
- microsoft/orca-math-word-problems-200k
- Open-Orca/SlimOrca
license: apache-2.0
---

Qwen2 fine-tune

# MaziyarPanahi/calme-2.6-qwen2-7b

This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve on the base model across all benchmarks.

# ⚡ Quantized GGUF

All GGUF models are available here: [MaziyarPanahi/calme-2.6-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.6-qwen2-7b-GGUF)

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

coming soon!

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```

# How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.6-qwen2-7b")
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.6-qwen2-7b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.6-qwen2-7b")
```
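If you format prompts yourself (e.g. for a llama.cpp or raw-completion backend) rather than relying on `tokenizer.apply_chat_template`, the ChatML layout above can be rendered with a few lines of plain Python. This is only an illustrative sketch of the template structure; `build_chatml_prompt` is a hypothetical helper, not part of any library:

```python
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML prompt string.

    Each message becomes an <|im_start|>{role} ... <|im_end|> block; the
    trailing <|im_start|>assistant tag cues the model to generate its reply.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```

For actual inference, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which applies the template stored with the model and stays correct if the template ever changes.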