Charity Purpose Rater - Llama 3.2 1B Instruct - MLX LoRA

This repository hosts a LoRA adapter that fine-tunes meta-llama/Llama-3.2-1B-Instruct to rate charity purpose or mission statements on five scales: Specificity, Clarity, Impact, Inclusivity, and Attainable Goals.
The adapter was trained with MLX on Apple Silicon using the mlx-community/Llama-3.2-1B-Instruct-4bit base to keep memory use low.

Note: This repo contains LoRA weights only. You will need the base model from Meta under the Llama 3.2 community license. See the license section below.

Intended use

  • Score short purpose or mission statements for charities and similar social-purpose organizations.
  • Use in decision support or research workflows.
  • Not intended for real world compliance or automated decision making without human review.

How to use

MLX CLI

```shell
# Inference with the adapter
mlx_lm generate \
  --model mlx-community/Llama-3.2-1B-Instruct-4bit \
  --adapter-path /path/to/adapter-folder \
  --temp 0.2 \
  --max-tokens 120 \
  --prompt 'You are a strict rater of charity purpose statements. Use only the 1 to 5 scale.

Rate the charity purpose statement on five 1 to 5 scales. Return only a compact JSON object with these integer keys: Specificity, Clarity, Impact, Inclusivity, "Attainable Goals". Do not include explanations.

Charity: Example Org
Statement: "We help young people into good jobs through mentoring and accredited training."'
```

MLX Python

```python
from mlx_lm import load, generate

MODEL = "mlx-community/Llama-3.2-1B-Instruct-4bit"
ADAPTER = "/path/to/adapter-folder"  # folder that contains adapters.safetensors

system = "You are a strict rater of charity purpose statements. Use only the 1 to 5 scale."
user = (
    'Rate the charity purpose statement on five 1 to 5 scales. '
    'Return only a compact JSON object with these integer keys: '
    'Specificity, Clarity, Impact, Inclusivity, "Attainable Goals". '
    'Do not include explanations.\n\n'
    'Charity: Example Org\n'
    'Statement: "We help young people into good jobs through mentoring and accredited training."'
)
prompt = f"{system}\n\n{user}"

model, tokenizer = load(MODEL, adapter_path=ADAPTER)
out = generate(model, tokenizer, prompt, max_tokens=120, temp=0.2)
print(out)
```

Prompt template used during training

System

You are a strict rater of charity purpose statements. Use only the 1 to 5 scale.

User

Rate the charity purpose statement on five 1 to 5 scales. Return only a compact JSON object with these integer keys: Specificity, Clarity, Impact, Inclusivity, "Attainable Goals". Do not include explanations.

Charity: {name}
Statement: "{text}"

Assistant target

{"Specificity": <int>, "Clarity": <int>, "Impact": <int>, "Inclusivity": <int>, "Attainable Goals": <int>}

Keep temperature low. 0.2 is a good default.
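For programmatic use, the training template above can be filled in with a small helper. The function and constant names here are illustrative, not part of the repo:

```python
# Rubric text copied verbatim from the training template above.
RUBRIC = (
    'Rate the charity purpose statement on five 1 to 5 scales. '
    'Return only a compact JSON object with these integer keys: '
    'Specificity, Clarity, Impact, Inclusivity, "Attainable Goals". '
    'Do not include explanations.'
)

def build_user_prompt(name: str, text: str) -> str:
    """Fill the {name} and {text} slots exactly as formatted during training."""
    return f'{RUBRIC}\n\nCharity: {name}\nStatement: "{text}"'

prompt = build_user_prompt(
    "Example Org",
    "We help young people into good jobs through mentoring and accredited training.",
)
print(prompt)
```

Keeping the wording byte-for-byte identical to the training template matters for a small 1B adapter; paraphrasing the rubric can degrade score quality.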

Training details

  • Base model: meta-llama/Llama-3.2-1B-Instruct
  • Format: Chat JSONL with messages entries
  • Optimizer: default MLX LoRA settings
  • Fine tune type: LoRA
  • Quantization: 4 bit base (mlx-community build)
  • Loss masking: prompts masked with --mask-prompt so loss applies to assistant tokens only

Hyperparameters

  • iters: 2200
  • batch size: 4
  • learning rate: 2e-5
  • max seq length: 512
  • grad checkpoint: on
  • steps per eval: 400
  • val batches: 5
  • save every: 200
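As a sketch, the hyperparameters above correspond to an mlx_lm LoRA training invocation roughly like the following. Paths are placeholders, and flag names follow current mlx-lm releases, so check `mlx_lm.lora --help` against your installed version:

```shell
mlx_lm.lora \
  --model mlx-community/Llama-3.2-1B-Instruct-4bit \
  --train \
  --data /path/to/data-folder \
  --mask-prompt \
  --iters 2200 \
  --batch-size 4 \
  --learning-rate 2e-5 \
  --max-seq-length 512 \
  --grad-checkpoint \
  --steps-per-eval 400 \
  --val-batches 5 \
  --save-every 200 \
  --adapter-path /path/to/adapter-folder
```

The data folder is expected to contain `train.jsonl` and `valid.jsonl` in the chat `messages` format described under Training details.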

Hardware

  • Apple Silicon arm64
  • Peak Metal memory reported near 2.6 GB with the settings above

Data

  • Source file: Charity_Purpose_Mission_with_Scores.csv
  • 5 labels per example: Specificity, Clarity, Impact, Inclusivity, Attainable Goals
  • Train/validation split: 90/10
  • Approx counts: 8,826 train and 981 validation
  • Data is not redistributed in this repo
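A sketch of how a CSV row could be converted into the chat JSONL format used for training. The column names (`Charity`, `Statement`) are assumptions about the source file, not confirmed from the repo:

```python
import json

LABELS = ["Specificity", "Clarity", "Impact", "Inclusivity", "Attainable Goals"]
SYSTEM = "You are a strict rater of charity purpose statements. Use only the 1 to 5 scale."
RUBRIC = (
    'Rate the charity purpose statement on five 1 to 5 scales. '
    'Return only a compact JSON object with these integer keys: '
    'Specificity, Clarity, Impact, Inclusivity, "Attainable Goals". '
    'Do not include explanations.'
)

def row_to_chat(row: dict) -> dict:
    """Convert one CSV row (as a dict of strings) into a chat JSONL record.
    'Charity' and 'Statement' column names are hypothetical."""
    user = f'{RUBRIC}\n\nCharity: {row["Charity"]}\nStatement: "{row["Statement"]}"'
    target = json.dumps({label: int(row[label]) for label in LABELS})
    return {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user},
            {"role": "assistant", "content": target},
        ]
    }
```

Each record would then be written as one JSON line to `train.jsonl` or `valid.jsonl`.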

Evaluation

A simple validation check reports mean absolute error (MAE) per label. You can reproduce it quickly by sampling valid.jsonl, generating at temperature 0.2, and parsing the JSON output.
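Parsing the generated ratings can be done with a small validator that rejects malformed or out-of-range outputs; a minimal sketch (function name is illustrative):

```python
import json
import re

LABELS = ["Specificity", "Clarity", "Impact", "Inclusivity", "Attainable Goals"]

def parse_scores(generated: str):
    """Extract the first JSON object from model output and validate it.
    Returns the score dict, or None if the output is not a valid rating."""
    match = re.search(r"\{.*?\}", generated, re.DOTALL)
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if set(obj) != set(LABELS):
        return None
    if not all(isinstance(v, int) and 1 <= v <= 5 for v in obj.values()):
        return None
    return obj
```

Counting how often `parse_scores` returns None gives the JSON validity rate suggested below.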

Suggested metrics

  • MAE per label
  • Exact integer match rate per label
  • JSON validity rate
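Once predictions are parsed, MAE per label reduces to a few lines:

```python
def mae_per_label(preds, golds, labels):
    """Mean absolute error for each label, over paired lists of
    prediction dicts and gold-label dicts."""
    return {
        label: sum(abs(p[label] - g[label]) for p, g in zip(preds, golds)) / len(preds)
        for label in labels
    }
```

Exact-match rate per label can be computed the same way by replacing the absolute difference with an equality check.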

Limitations

  • The model learns from short statements and may not generalize to long narratives.
  • Scores are subjective and depend on the rubric implied by the data.
  • The adapter is not calibrated for downstream decisions. Human review is required.
  • English only.

Safety and bias

  • Statements about social impact and inclusion can reflect biases present in the data.
  • Use with caution in any sensitive or high stakes decision process.
  • Always keep a human in the loop.

License

  • Base model: Meta Llama 3.2 community license terms apply. You must accept the license on the Meta model page to download the base.
  • This repo contains LoRA weights derived from that base. Redistribution of the base weights is not included here.
  • Dataset is proprietary to the dataset owner. Not included in this repo.

Acknowledgements

  • Meta for Llama 3.2
  • Apple MLX team and mlx-community for the MLX model builds

Contact

  • Maintainer: www.trsutimpact.com
  • Questions or issues: open a Discussion on the Hugging Face repo