---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: lmstudio-community/Qwen2.5-14B-Instruct-MLX-8bit
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
language:
- en
tags:
- chat
- mlx
- mlx-my-repo
library_name: mlx
---

# Fmuaddib/Qwen2.5-14B-Instruct-MLX-8bit-mlx-8Bit

The model [Fmuaddib/Qwen2.5-14B-Instruct-MLX-8bit-mlx-8Bit](https://huggingface.co/Fmuaddib/Qwen2.5-14B-Instruct-MLX-8bit-mlx-8Bit) was converted to MLX format from [lmstudio-community/Qwen2.5-14B-Instruct-MLX-8bit](https://huggingface.co/lmstudio-community/Qwen2.5-14B-Instruct-MLX-8bit) using mlx-lm version **0.22.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("Fmuaddib/Qwen2.5-14B-Instruct-MLX-8bit-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available,
# so the instruct model sees the message format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a response; verbose=True streams tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
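
If you prefer the command line, installing mlx-lm also provides a `mlx_lm.generate` entry point. A minimal sketch, assuming the flags available in recent mlx-lm releases (the `--max-tokens` value here is just an illustrative choice):

```bash
# Run a one-off generation from the terminal; --max-tokens caps the response length.
mlx_lm.generate --model Fmuaddib/Qwen2.5-14B-Instruct-MLX-8bit-mlx-8Bit \
  --prompt "hello" \
  --max-tokens 256
```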