---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
- mlx-my-repo
base_model: unsloth/gpt-oss-120b
---
# mrtoots/unsloth-gpt-oss-120b-mlx-8Bit
This model ([mrtoots/unsloth-gpt-oss-120b-mlx-8Bit](https://huggingface.co/mrtoots/unsloth-gpt-oss-120b-mlx-8Bit)) was converted to MLX format from [unsloth/gpt-oss-120b](https://huggingface.co/unsloth/gpt-oss-120b) using mlx-lm version **0.26.4**.
## Toots' Note:
This model was converted to MLX format and quantized to 8-bit from unsloth's release of gpt-oss-120b.
Please follow and support [unsloth's work](https://huggingface.co/unsloth) if you like it!
🦛 If you want a free consulting session, [fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) to get in touch! 🤗
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mrtoots/unsloth-gpt-oss-120b-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
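For a quick smoke test without writing any Python, mlx-lm also ships a command-line generator. A minimal sketch, assuming the `mlx_lm.generate` entry point from a current mlx-lm install (check `mlx_lm.generate --help` against your installed version):

```bash
# Download the model (if needed) and generate from the terminal
mlx_lm.generate --model mrtoots/unsloth-gpt-oss-120b-mlx-8Bit \
  --prompt "hello" --max-tokens 256
```

If you would rather query the model over an API, mlx-lm also includes an OpenAI-compatible HTTP server (`mlx_lm.server`) that can load this repo the same way.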