---
license: mit
base_model: mlx-community/gemma-3-1b-it-bf16
---
# LoRA Adapter for `mlx-community/gemma-3-1b-it-bf16`
This repository contains a LoRA adapter trained to play Wordle.
## Usage
This adapter must be applied on top of the base model `mlx-community/gemma-3-1b-it-bf16`. You will also need the `LoRALinear` class from the original MLX LoRA examples, since the adapter weights were trained against layers wrapped with it; a minimal sketch is provided below.
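If you do not have that class handy, here is a minimal sketch of a compatible `LoRALinear`, following the pattern in the MLX LoRA examples. The exact upstream implementation differs in details (dtype handling, dropout), so treat this as illustrative:

```python
import math
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""

    @staticmethod
    def from_linear(linear: nn.Linear, rank: int = 8):
        output_dims, input_dims = linear.weight.shape
        lora = LoRALinear(input_dims, output_dims, rank)
        lora.linear = linear
        return lora

    def __init__(self, input_dims: int, output_dims: int, rank: int = 8):
        super().__init__()
        self.linear = nn.Linear(input_dims, output_dims, bias=False)
        # Standard LoRA init: A ~ small uniform, B = 0, so the update starts at zero
        scale = 1.0 / math.sqrt(input_dims)
        self.lora_a = mx.random.uniform(low=-scale, high=scale, shape=(input_dims, rank))
        self.lora_b = mx.zeros((rank, output_dims))

    def __call__(self, x):
        y = self.linear(x)
        z = (x @ self.lora_a) @ self.lora_b
        return y + 2.0 * z
```

The fixed `2.0` scale matches the scaling in the MLX example code; if the adapter was trained with a different scale, adjust it accordingly.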
```python
import mlx.core as mx
from mlx.utils import tree_unflatten
from mlx_lm import load
from huggingface_hub import hf_hub_download
# Requires the LoRALinear class (see the sketch above, or the MLX examples).
# Load the base model
model, tokenizer = load("mlx-community/gemma-3-1b-it-bf16")
# Apply the LoRA layers to the model
# This must be done *before* loading the adapter weights
model.freeze()
for l in model.model.layers[-16:]:
    l.self_attn.q_proj = LoRALinear.from_linear(l.self_attn.q_proj, rank=8)
    l.self_attn.v_proj = LoRALinear.from_linear(l.self_attn.v_proj, rank=8)
    # No-op for Gemma (it has no MoE blocks); kept from the generic MLX recipe
    if hasattr(l, "block_sparse_moe"):
        l.block_sparse_moe.gate = LoRALinear.from_linear(l.block_sparse_moe.gate, rank=8)
# Download and load the adapter weights
adapter_path = hf_hub_download(repo_id="charbull/ppo-gemma-lora-only-v1", filename="wordle_gemma3_grpo_adapters.npz")
weights = mx.load(adapter_path)
# mx.load returns a flat dict keyed by dotted paths, so unflatten it before
# updating; model.load_weights(adapter_path, strict=False) is an alternative.
model.update(tree_unflatten(list(weights.items())))
mx.eval(model.parameters())  # force evaluation of the updated parameters
print("Adapter loaded successfully.")
# The model is now ready for generation or fusion (see below).
```
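Once the adapter is loaded, you can generate through `mlx_lm` as usual. A minimal example, assuming a recent `mlx_lm` whose `generate` accepts a chat-templated prompt (the Wordle prompt itself is only illustrative):

```python
from mlx_lm import generate

messages = [{"role": "user", "content": "Let's play Wordle. Give me a first guess."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```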
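If you would rather ship a standalone model, the adapter can be fused back into plain linear layers. The `fuse_lora` helper below is an illustrative sketch, not part of `mlx_lm`; it assumes the `LoRALinear` definition shown earlier (the `2.0` factor mirrors its forward pass):

```python
def fuse_lora(model):
    """Merge each LoRA delta into its wrapped nn.Linear and swap it back in."""
    for l in model.model.layers:
        for name in ("q_proj", "v_proj"):
            proj = getattr(l.self_attn, name)
            if isinstance(proj, LoRALinear):
                fused = proj.linear
                # weight is (out, in); the low-rank delta (lora_a @ lora_b) acts on
                # the input side, so transpose it into (out, in) before adding
                delta = 2.0 * (proj.lora_b.T @ proj.lora_a.T)
                fused.weight = fused.weight + delta.astype(fused.weight.dtype)
                setattr(l.self_attn, name, fused)
    return model

model = fuse_lora(model)
```

After fusing, the model no longer depends on the `LoRALinear` class and can be saved or quantized like any other MLX model.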