---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
  - lora
  - mlx
  - fine-tuned
library_name: mlx
---

# LoRA Adapters for Phi-3-mini-4k-instruct

This repository contains LoRA adapter weights for `microsoft/Phi-3-mini-4k-instruct`, fine-tuned with MLX.

## Model Details

- Base Model: microsoft/Phi-3-mini-4k-instruct
- Training Framework: MLX
- Adapter Type: LoRA (Low-Rank Adaptation)
- Trainable Parameters: 3,145,728 (0.08% of total)
- Total Model Parameters: 3,824,225,280

## LoRA Configuration

- Rank (r): 16
- Scale: 20.0
- Dropout: 0.1
- Target Modules: `self_attn.q_proj`, `self_attn.k_proj`, `self_attn.v_proj`, `self_attn.o_proj`
- Number of Layers: 32 (out of 32 total); see the configuration sketch below
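
These settings are what `adapter_config.json` in this repo encodes, and they are exactly what the loading code further down reads via its `lora_layers` and `lora_parameters` keys. A minimal sketch of that configuration is shown here as a Python dict; exact key names can vary between mlx-lm versions, so treat the file in this repo as authoritative:

```python
# Sketch of the adapter configuration that linear_to_lora_layers expects.
# Key names may differ slightly across mlx-lm versions; the values mirror
# the list above (rank 16, scale 20.0, dropout 0.1, q/k/v/o projections, 32 layers).
adapter_config = {
    "lora_layers": 32,          # number of transformer layers to adapt
    "lora_parameters": {
        "keys": [               # target modules
            "self_attn.q_proj",
            "self_attn.k_proj",
            "self_attn.v_proj",
            "self_attn.o_proj",
        ],
        "rank": 16,
        "scale": 20.0,
        "dropout": 0.1,
    },
}
```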

## Usage

### Installation

```bash
pip install mlx-lm
```

### Loading the Adapters

**Option 1: Load from the Hugging Face Hub**

```python
from mlx_lm import load, generate
from mlx_lm.tuner import linear_to_lora_layers
from huggingface_hub import snapshot_download
import json

# Download the adapters from the Hugging Face Hub
adapter_path = snapshot_download(repo_id="didierlopes/phi-3-mini-4k-instruct-ft-on-my-blog")

# Load the base model
model, tokenizer = load("microsoft/Phi-3-mini-4k-instruct")

# Load the adapter config
with open(f"{adapter_path}/adapter_config.json", "r") as f:
    adapter_config = json.load(f)

# Freeze the base model and convert the target linear layers to LoRA layers
model.freeze()
linear_to_lora_layers(
    model,
    adapter_config["lora_layers"],
    adapter_config["lora_parameters"],
)

# Load the LoRA weights (strict=False because the file contains only adapter weights)
model.load_weights(f"{adapter_path}/adapters.safetensors", strict=False)

# Generate text using the Phi-3 chat format
prompt = "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\nHello!<|end|>\n<|assistant|>"
response = generate(model, tokenizer, prompt, max_tokens=200)
print(response)
```
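
Alternatively, recent mlx-lm releases can attach the adapters for you at load time. The snippet below is a more compact equivalent of the code above, assuming your installed version's `load` accepts an `adapter_path` argument:

```python
from mlx_lm import load, generate
from huggingface_hub import snapshot_download

# Download the adapters, then let mlx-lm apply them while loading the base model.
# Assumes load() supports the adapter_path keyword (available in recent mlx-lm releases).
adapter_path = snapshot_download(repo_id="didierlopes/phi-3-mini-4k-instruct-ft-on-my-blog")
model, tokenizer = load("microsoft/Phi-3-mini-4k-instruct", adapter_path=adapter_path)

prompt = "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\nHello!<|end|>\n<|assistant|>"
print(generate(model, tokenizer, prompt, max_tokens=200))
```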

**Option 2: Clone and Load Locally**

```bash
git clone https://huggingface.co/didierlopes/phi-3-mini-4k-instruct-ft-on-my-blog
cd phi-3-mini-4k-instruct-ft-on-my-blog
```

Then use the same Python code as above, replacing `adapter_path` with the path to your local clone.
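
For example, if the clone sits in your working directory:

```python
# Point adapter_path at the local clone instead of the Hub download.
adapter_path = "./phi-3-mini-4k-instruct-ft-on-my-blog"
```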

## Training Details

These adapters were trained using:

- Framework: MLX with LoRA fine-tuning
- Hardware: Apple Silicon
- Training approach: parameter-efficient fine-tuning with gradient checkpointing (see the launch sketch below)
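
As an illustration of how such a run is launched with mlx-lm's built-in LoRA trainer (the dataset path, iteration count, and other hyperparameters below are placeholders, not the exact command used to produce these adapters):

```bash
# Illustrative sketch only; --data and --iters are placeholders.
# LoRA rank/scale/dropout and the number of adapted layers are typically set
# via a YAML config or additional flags, depending on your mlx-lm version.
python -m mlx_lm.lora \
  --model microsoft/Phi-3-mini-4k-instruct \
  --train \
  --data ./data \
  --grad-checkpoint \
  --iters 1000
```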

## Files

- `adapters.safetensors`: final adapter weights
- `adapter_config.json`: LoRA configuration
- `config.json`: training and model metadata
- Other `*.safetensors` files: intermediate training checkpoints (optional)

## License

These adapters are released under the MIT License. The base model is subject to its own license terms; see the microsoft/Phi-3-mini-4k-instruct model card.