---
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
library_name: mlx
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
license: other
tags:
- mlx
---

# mlx-community/Teuken-7B-instruct-research-v0.4-4bit

This model [mlx-community/Teuken-7B-instruct-research-v0.4-4bit](https://huggingface.co/mlx-community/Teuken-7B-instruct-research-v0.4-4bit) was converted to MLX format from [openGPT-X/Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) using mlx-lm version **0.25.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Teuken-7B-instruct-research-v0.4-4bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so this
# instruct-tuned model sees the conversation format it was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
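As a quick sketch of an alternative to the Python snippet above, `mlx-lm` also installs a command-line entry point for one-off generation. The `mlx_lm.generate` command and the `--model`/`--prompt`/`--max-tokens` flags shown here are standard in recent `mlx-lm` releases, but verify against your installed version:

```bash
# One-off generation from the shell, no Python script needed.
# --max-tokens caps the length of the generated response.
mlx_lm.generate --model mlx-community/Teuken-7B-instruct-research-v0.4-4bit \
  --prompt "hello" \
  --max-tokens 256
```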