---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - code
  - qwen-coder
  - finetune
  - mlx
base_model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
pipeline_tag: text-generation
---

# mitkox/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-mlx

The model `mitkox/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-mlx` was converted to MLX format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using mlx-lm version 0.18.2.
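
MLX conversions like this one are normally produced with the `mlx_lm.convert` command-line tool. The exact invocation used for this repository is not recorded, so the command below is only a sketch: the `--mlx-path` output directory is illustrative, and it assumes no quantization flag was passed.

```bash
# Sketch only: a plain (non-quantized) MLX conversion with mlx-lm 0.18.2.
pip install mlx-lm==0.18.2
python -m mlx_lm.convert \
    --hf-path WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B \
    --mlx-path WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-mlx
```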

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (or load from cache) the converted weights and tokenizer.
model, tokenizer = load("mitkox/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
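
For quick tests without writing any Python, mlx-lm also ships a generation CLI. A minimal sketch; the prompt text and token limit here are illustrative:

```bash
# Sketch: one-off generation via the mlx_lm.generate CLI.
python -m mlx_lm.generate \
    --model mitkox/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-mlx \
    --prompt "Write a Python function that reverses a string." \
    --max-tokens 256
```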