---
license: gemma
library_name: mlx
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-1b-it
tags:
- mlx
model-index:
- name: gemma-3-1b-it-DQ
  results:
  - task:
      type: text-generation
    dataset:
      type: PIQA
      name: PIQA
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.75
      verified: false
  - task:
      type: text-generation
    dataset:
      type: winogrande
      name: winogrande
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.6
      verified: false
  - task:
      type: text-generation
    dataset:
      type: boolq
      name: boolq
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.73
      verified: false
  - task:
      type: text-generation
    dataset:
      type: arc-c
      name: arc-c
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.35
      verified: false
---
# mlx-community/gemma-3-1b-it-DQ
This model [mlx-community/gemma-3-1b-it-DQ](https://huggingface.co/mlx-community/gemma-3-1b-it-DQ) was converted to MLX format from [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) using mlx-lm version **0.25.2**.
This quantized model is 2x faster and has a 2.4x smaller memory footprint than the dequantized model.
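
For reference, conversions like this one are typically produced with the mlx-lm converter CLI. The command below is a minimal sketch: `-q` enables quantization with mlx-lm's default settings, and the exact recipe used for this DQ variant may differ, so treat the flags and output path as assumptions.

```bash
# Sketch only: convert the base model to MLX format and quantize it.
# The actual quantization recipe of the DQ variant may differ from the
# defaults implied by -q; the --mlx-path output directory is illustrative.
mlx_lm.convert --hf-path google/gemma-3-1b-it --mlx-path gemma-3-1b-it-DQ -q
```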
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("mlx-community/gemma-3-1b-it-DQ")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined,
# appending the generation prompt so the model replies as the assistant.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
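
When installed via pip, mlx-lm also provides a command-line generator, so the same model can be exercised without writing any Python (the prompt here is just illustrative):

```bash
# Generate a completion directly from the shell.
mlx_lm.generate --model mlx-community/gemma-3-1b-it-DQ --prompt "hello"
```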