MoA-Metric-LM-150M (Convergent)
A compact-but-capable ≈150M-parameter causal LM that replaces dot-product attention with metric-native attention and augments sequence geometry with BlackHoleRoPE (a learnable, stable RoPE variant). Designed to train and run on modest hardware (CPU-first friendly) while staying fully compatible with 🤗 Transformers.
Why this model?
• Distance scores, not dot products. Heads score with L2, cosine, or diag-Mahalanobis distances. This gives direct control over geometry, often stabilizes training, and can be more sample-efficient.
• BlackHoleRoPE positional encoding.
  • Q/K: pure unit-modulus rotation (unitary → numerically stable).
  • V: bounded-energy gating (Penrose-inspired), optionally modulated by a discrepancy signal.
  • Parameters synthesized from a tiny Fourier basis → extrapolable and cache-friendly, with low memory.
• MoA (Mixture-of-Architectures) block. A token-wise router softly blends four heads per block (a sketch of the blend follows this list):
  1. LocalConv (depthwise token-local conv)
  2. MetricMHAttention (multi-head metric attention)
  3. ChannelMix (MLP)
  4. MetricMQA (multi-query, shared K/V)
• Triangle-Inequality (TI) regularizer. Keeps metric heads honest by penalizing violations over random triples.
• Runs on CPUs. Implemented to behave well in FP32 on AVX2/AVX-512 machines.
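The blend can be pictured as a per-token softmax over the four sub-modules. The sketch below is illustrative only: module names and interfaces are assumptions (each head maps (batch, seq, d_model) to the same shape), and the repository's modeling code is authoritative.

import torch
import torch.nn as nn

class MoABlockSketch(nn.Module):
    def __init__(self, d_model: int, experts: nn.ModuleList):
        super().__init__()
        # experts ≈ [LocalConv, MetricMHAttention, ChannelMix, MetricMQA]
        self.experts = experts
        self.router = nn.Linear(d_model, len(experts))  # per-token routing logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); each expert returns the same shape.
        weights = torch.softmax(self.router(x), dim=-1)              # (B, T, 4)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, 4)
        return (outputs * weights.unsqueeze(2)).sum(dim=-1)          # token-wise soft blend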
⸻
Model at a glance

| Property | Value |
|---|---|
| Parameters | ~150 M (exact count depends on vocab; see config.json) |
| Layers | 12–24 depending on variant (MoA blocks) |
| Hidden size | ≥ 1024 in the 400 M variant (head dim divisible by #heads) |
| Attention | Metric-native (L2 / cosine / diag-Mahalanobis), plus MetricMQA |
| Positional | BlackHoleRoPE per-head (rope_global for MH-Attn, rope_mqa for MQA) |
| Router | Token-wise soft mixture across the four heads (+ optional bias gate) |
| FFN | HyperFFN = SwiGLU MLP + SepConv1d + Low-Rank path (router-mixed) |
| Context | Trained primarily at 512–1024 tokens; config allows up to 2048 |
| Precision | Training FP32 (CPU-friendly); inference FP32/BF16/FP16 supported |
| License | Apache-2.0 |

Note on context: training emphasized 512–1024; BlackHoleRoPE is extrapolable, but throughput and quality beyond training lengths depend on your hardware and data.
⸻
Intended use & limitations
Intended: compact assistants, long-context reading/QA, math-style step reasoning, research on distance-based attention and geometric inductive biases.
Not intended: safety-critical use, heavy factual QA at web scale, or domains requiring guaranteed accuracy. Evaluate carefully before deployment.
⸻
Datasets
Token budgets are approximate; bracketed pairs are the [batch, seq] shapes used.
- WeMake/Intelligent-Content-Understanding: ~256k tokens, [8, 256], [4, 512]
- QingyiSi/Alpaca-CoT: ~128k tokens, [2, 1024], [1, 2048], [4, 512]
- HuggingFaceH4/MATH-500: ~256k tokens, [8, 256], [4, 512]
- zai-org/LongWriter-6k: ~128k tokens, [2, 1024], [1, 2048]
- SFT: prithivMLmods/Deepthink-Reasoning, [8, 256]; final loss 0.3200, total tokens ~128,512
Training used modest token budgets (hundreds of thousands). Reported training logs showed healthy loss descent on both 512 and 1024 sequence lengths on CPU runs. Exact metrics will vary with tokenizer, preprocessing, and optimizer settings.
⸻
Installation
pip install transformers accelerate sentencepiece
⸻
Quick start
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
repo = "reaperdoesntknow/MoA-150M"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float32, device_map="cpu"
).eval()
prompt = "Read and answer: If 3x + 2 = 17, what is x?\nReasoning:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=256,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tok.eos_token_id,
    )
print(tok.decode(out[0], skip_special_tokens=True))
Pipeline usage
from transformers import pipeline
repo = "reaperdoesntknow/MoA-400M"
pipe = pipeline("text-generation", model=repo, device_map="cpu")
print(
    pipe(
        "Question: Who wrote 'The Selfish Gene'?\nAnswer:",
        max_length=128,
        do_sample=False,
    )[0]["generated_text"]
)
⸻
Architecture details
Metric attention (MH)
• Scores:
  • L2: -||q - k||² / sqrt(d)
  • Cosine: normalized dot product, then scaled
  • diag-Mahalanobis: per-head diagonal scale on dimensions
• Stability: logits scaled by a learnable α; optional radius-based pruning mask for efficiency.
• Value path: post-attention Up/Down projector (gated) for expressive value mixing.
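As a rough illustration of the three score types (shapes, masking, and the learnable scales are simplified here; the repository's modeling code is the reference):

import math
import torch
import torch.nn.functional as F

def metric_scores(q, k, kind="l2", diag_scale=None):
    # q: (B, Tq, d), k: (B, Tk, d) for a single head.
    d = q.shape[-1]
    if kind == "l2":
        # -||q - k||^2 / sqrt(d): closer pairs get larger (less negative) scores.
        return -torch.cdist(q, k, p=2).pow(2) / math.sqrt(d)
    if kind == "cosine":
        # Normalized dot product; the model applies a learnable scale on top.
        return F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
    if kind == "mahalanobis":
        # Diagonal Mahalanobis: per-dimension (per-head, learnable) positive scales.
        s = diag_scale if diag_scale is not None else torch.ones(d)
        diff = q.unsqueeze(-2) - k.unsqueeze(-3)          # (B, Tq, Tk, d)
        return -(diff.pow(2) * s).sum(-1) / math.sqrt(d)
    raise ValueError(f"unknown kind: {kind}")

# A learnable alpha and an additive mask are applied before the softmax, e.g.
# attn = torch.softmax(alpha * metric_scores(q, k, "l2") + mask, dim=-1)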
Metric MQA (shared K/V)
• K and V are shared (single projection) and broadcast; queries remain multi-head. Useful for throughput and memory.
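A minimal sketch of the shared-K/V broadcast with an L2 score (shapes are assumptions; the actual MQA head also applies RoPE, gating, and masking):

import math
import torch

def metric_mqa(q, k, v):
    # q: (B, H, Tq, d) per-head queries; k, v: (B, Tk, d) from one shared projection.
    d = q.shape[-1]
    diff = q.unsqueeze(-2) - k.unsqueeze(1).unsqueeze(-3)   # (B, H, Tq, Tk, d)
    scores = -diff.pow(2).sum(-1) / math.sqrt(d)            # shared K broadcast over heads
    attn = torch.softmax(scores, dim=-1)                    # (B, H, Tq, Tk)
    return attn @ v.unsqueeze(1)                            # shared V broadcast -> (B, H, Tq, d)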
BlackHoleRoPE
• Q/K rotation only (unit modulus) → preserves norms; avoids value blow-ups.
• V receives bounded-energy amplification (energy_min..energy_max) with optional discrepancy modulation.
• Parameters synthesized from a small Fourier basis; reduces cache size and improves length generalization (a short sketch follows).
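The two ideas can be illustrated as follows; this is a simplified sketch (the real module synthesizes its angles and gates from a small Fourier basis plus a discrepancy signal, which is omitted here, and the energy bounds below are placeholder values):

import torch

def rotate_pairs(x, theta):
    # x: (B, T, d) with d even; theta: (T, d // 2) angles per position and channel pair.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = torch.cos(theta), torch.sin(theta)
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin   # pure rotation: Q/K norms are preserved
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def bounded_energy_gate(v, raw_gate, energy_min=0.5, energy_max=1.5):
    # Squash an unconstrained signal into [energy_min, energy_max] and scale V with it,
    # so the value path can be amplified but never blows up.
    g = energy_min + (energy_max - energy_min) * torch.sigmoid(raw_gate)
    return v * g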
Routing & gates β’ TokenRouter: per-token weights over {LocalConv, MetricMH, ChannelMix, MetricMQA}. β’ Feature gates: per-head multiplicative scales in (0, 2) around 1.0. β’ Optional router bias adds signed offsets before softmax.
Triangle-Inequality regularizer
• Lightweight penalty on random triples to discourage degenerate metric geometry.
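One way such a penalty can be implemented (the sampling scheme and metric here are placeholders):

import torch

def ti_penalty(h, metric, n_triples=64):
    # h: (B, T, d) token representations; metric(a, b) returns nonnegative distances.
    B, T, _ = h.shape
    i, j, k = torch.randint(0, T, (3, n_triples))
    d_ij = metric(h[:, i], h[:, j])
    d_jk = metric(h[:, j], h[:, k])
    d_ik = metric(h[:, i], h[:, k])
    # Penalize violations of the triangle inequality d(i, k) <= d(i, j) + d(j, k).
    return torch.relu(d_ik - (d_ij + d_jk)).mean()

# Example metric: token-wise L2 distance, added to the LM loss with a small weight.
# loss = lm_loss + ti_weight * ti_penalty(hidden, lambda a, b: (a - b).norm(dim=-1))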
⸻
Training recipe (reference)
• Device: CPU (AVX2/AVX-512 recommended).
• Precision: FP32.
• Optimizer: AdamW or Adam (β₁ = 0.9, β₂ = 0.95–0.999 both work); cosine LR schedule or linear warmup.
• Batch/seq: [batch, seq] = [2–4, 512–1024].
• Regularization: modest dropout in the attention/value paths; optional TI penalty.
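For reference, a plain PyTorch setup matching these settings might look like the following (it reuses the model object from the quick start; the learning rate and warmup mirror the fine-tuning example below, and the total step count is a placeholder):

import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(
    model.parameters(), lr=5e-4, betas=(0.9, 0.95), weight_decay=0.0
)
# num_training_steps is illustrative; derive it from your dataset size and batch shape.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=2_000
)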
If you see NaN/Inf during sampling, ensure masks are additive 0/-inf, clamp logits when rows are fully masked, and set a pad_token_id in .generate().
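A small helper showing that convention (a sketch, not the model's internal code): the mask is 0 where attention is allowed and -inf where it is not, and fully masked rows are clamped so the softmax stays finite.

import torch

def masked_softmax(scores, additive_mask):
    # scores, additive_mask: (B, H, Tq, Tk); mask entries are 0.0 or -inf.
    logits = scores + additive_mask
    fully_masked = torch.isinf(logits).all(dim=-1, keepdim=True)
    # Clamp fully masked rows to 0 so softmax returns a finite (uniform) row instead of NaN.
    logits = torch.where(fully_masked, torch.zeros_like(logits), logits)
    return torch.softmax(logits, dim=-1)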
⸻
Evaluation notes
The model targets behavioral quality per FLOP rather than leaderboard chasing. On held-out long-context QA and small math checks, it shows:
• Robust token-to-token coherence at 512–1024.
• Stable generation on CPU with FP32.
• Competitive loss trends versus dot-product baselines trained under the same compute.
Please share issues/benchmarks via the repo so results can be tracked.
⸻
How to fine-tune
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer
from datasets import load_dataset
repo = "reaperdoesntknow/MoA-150M"
tok = AutoTokenizer.from_pretrained(repo)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # padding is needed for fixed-length batches
model = AutoModelForCausalLM.from_pretrained(repo)
ds = load_dataset("yzhuang/Agentic-Long-Context-Understanding-QA", split="train[:2%]")

def tok_fn(ex):
    x = tok(
        ex["question"] + "\n" + ex["context"] + "\nAnswer:",
        truncation=True,
        max_length=512,
        padding="max_length",
    )
    # Use the inputs as labels, masking padded positions out of the loss.
    x["labels"] = [
        t if m == 1 else -100
        for t, m in zip(x["input_ids"], x["attention_mask"])
    ]
    return x
tds = ds.map(tok_fn, remove_columns=ds.column_names)
args = TrainingArguments(
    output_dir="./moa150m-finetune",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    num_train_epochs=1,
    learning_rate=5e-4,
    weight_decay=0.0,
    warmup_steps=100,
    logging_steps=10,
    save_steps=200,
    fp16=False,
    bf16=False,
)
trainer = Trainer(model=model, args=args, train_dataset=tds)
trainer.train()
Known behaviors / tips
• Context > 1024: works, but CPU throughput drops; BlackHoleRoPE helps stability, not throughput.
• Sampling: always pass pad_token_id (often eos_token_id) to .generate(); avoid temperature > 1.2 on small models.
• KV cache: supported; on CPU you may prefer smaller beams and greedy or low-temperature sampling.
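For example, a CPU-friendly call that follows these tips (reusing model, inputs, and tok from the quick start):

out = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=False,               # greedy decoding
    num_beams=1,                   # no beam search on CPU
    use_cache=True,                # KV cache
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))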
Safety & responsibility
This is a research model. It was trained on public datasets and may produce incorrect or biased content. Do not rely on it for advice or sensitive decisions.
Citation
@software{moa_metric_lm_150m,
  title  = {MoA-Metric-LM-150M: Distance-based attention with BlackHoleRoPE},
  author = {reaperdoesntknow},
  year   = {2025},
  url    = {https://huggingface.co/reaperdoesntknow/MoA-150M}
}
Acknowledgements
Built with 🤗 Transformers and a metric-first rethinking of attention. BlackHoleRoPE draws inspiration from symplectic/rotational encodings and bounded-energy dynamics.