Qwen2.5-125B-Instruct

Qwen2.5-125B-Instruct is a Qwen/Qwen2.5-72B-Instruct self-merge made with MergeKit.

It was inspired by large self-merges such as mlabonne/Meta-Llama-3-120B-Instruct.

Special thanks to Eric Hartford for both inspiring and evaluating the original model, to Charles Goddard for creating MergeKit, and to Maxime Labonne for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.

🔍 Applications

This model is recommended for creative writing tasks. It uses the Qwen chat template and has a default context window of 8K tokens, which can be extended by raising the RoPE theta value.
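
As a rough sketch of the RoPE theta extension (the scaling factor below is a placeholder assumption, not a validated setting), the context window can be stretched by raising rope_theta in the model config before loading:

from transformers import AutoConfig, AutoModelForCausalLM
import torch

config = AutoConfig.from_pretrained("ssmits/Qwen2.5-125B-Instruct")
config.rope_theta *= 4  # hypothetical scaling factor; tune and validate on long prompts

model = AutoModelForCausalLM.from_pretrained(
    "ssmits/Qwen2.5-125B-Instruct",
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)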

The model is generally quite creative and has a good writing style. It may occasionally output typos and show a preference for uppercase text.

⚡ Quantized models

Quantized versions are not yet available; links will be added here once they are:

  • GGUF: [Link to GGUF model]
  • EXL2: [Link to EXL2 model]
  • mlx: [Link to mlx model]

πŸ† Evaluation

This model has yet to be thoroughly evaluated. It is expected to excel in creative writing but may have limitations in other tasks. Use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases.

Once the model has been thoroughly tested, this section will be updated with:

  • Links to evaluation threads on social media platforms
  • Examples of the model's performance in creative writing tasks
  • Comparisons with other large language models in various applications
  • Community feedback and use cases

We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.

🧩 Configuration

slices:
- sources:
  - layer_range: [0, 20]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [10, 30]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [20, 40]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [30, 50]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [40, 60]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [50, 70]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [60, 80]
    model: Qwen/Qwen2.5-72B-Instruct
merge_method: passthrough
dtype: bfloat16
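
For intuition, the passthrough merge stacks seven overlapping 20-layer slices of the 80-layer base model end to end. A back-of-the-envelope check of the resulting size (approximate, since embeddings and the LM head are not duplicated):

# Seven 20-layer slices, each offset by 10 layers from the previous one.
ranges = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]
total_layers = sum(end - start for start, end in ranges)
print(total_layers)                    # 140 layers vs. 80 in the base model
print(round(72 * total_layers / 80))   # ~126B parameters, i.e. the "125B" in the name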

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ssmits/Qwen2.5-125B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the Qwen chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Shard the model across all available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
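
At roughly 250 GB in bfloat16, the full model will not fit on a single GPU. As one option (an assumption, not an officially supported path for this card), 4-bit loading with bitsandbytes cuts memory by about 4x; it requires pip install bitsandbytes:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Hypothetical 4-bit setup; quality impact on creative writing is untested.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "ssmits/Qwen2.5-125B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)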