---
license: gemma
library_name: mlx
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- mlx
---
# mlx-community/gemma-3-12b-it-4bit-DWQ
This model [mlx-community/gemma-3-12b-it-4bit-DWQ](https://huggingface.co/mlx-community/gemma-3-12b-it-4bit-DWQ) was
converted to MLX format from [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/gemma-3-12b-it-4bit-DWQ")

prompt = "hello"

# Wrap the prompt in the model's chat template if one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
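You can also generate text from the command line with the `mlx_lm.generate` entry point that mlx-lm installs; a minimal sketch (the prompt text is illustrative):

```bash
mlx_lm.generate --model mlx-community/gemma-3-12b-it-4bit-DWQ --prompt "hello"
```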