Marcus Aurelius 3B - Synthetic Data Kit + MLX

A model fine-tuned to embody the wisdom of Marcus Aurelius, built with Synthetic Data Kit and MLX.

🏛️ Features

  • Base Model: meta-llama/Llama-3.2-3B-Instruct
  • Method: Synthetic Data Kit + LoRA fine-tuning with MLX (see the training sketch below)
  • Hardware: Apple M4 Pro (48GB RAM)
  • Dataset: Complete Meditations + synthetic data
  • Optimized for: Apple Silicon
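
Training was done with LoRA on MLX rather than full fine-tuning. As a rough illustration, the sketch below drives the mlx_lm.lora command-line entry point from Python; the data path, iteration count, and batch size are assumptions for this card, not the exact settings used, and flag names can vary between mlx-lm versions.

# Minimal LoRA training sketch (assumed hyperparameters and paths).
# Check `python -m mlx_lm.lora --help` for the flags in your mlx-lm version.
import subprocess

subprocess.run([
    "python", "-m", "mlx_lm.lora",
    "--model", "meta-llama/Llama-3.2-3B-Instruct",  # base model listed above
    "--train",
    "--data", "./data",            # folder with train.jsonl / valid.jsonl (assumed layout)
    "--iters", "600",              # assumed iteration count
    "--batch-size", "4",           # assumed batch size for 48GB of unified memory
    "--adapter-path", "adapters",  # where the LoRA adapter weights are written
], check=True)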

🚀 Usage

from mlx_lm import load, generate

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model, tokenizer = load("federicomoreno/marcus-aurelius-3b-sdk")

# Generate a short completion for a philosophical prompt
response = generate(model, tokenizer, prompt="What is virtue?", max_tokens=100)
print(response)
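
Because the base model is an Instruct variant, instruction-style prompts may respond better when passed through the tokenizer's chat template before generation. A minimal sketch follows; the system prompt wording is an illustrative assumption, not something shipped with the model.

from mlx_lm import load, generate

model, tokenizer = load("federicomoreno/marcus-aurelius-3b-sdk")

# Build a chat-formatted prompt; the system message here is illustrative.
messages = [
    {"role": "system", "content": "You are Marcus Aurelius, Stoic philosopher and Roman emperor."},
    {"role": "user", "content": "How should I face adversity?"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)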

📊 Training Data

  1. Meditations: 264 passages extracted from the Project Gutenberg edition
  2. Templates: Curated philosophical questions
  3. Synthetic examples: generated with Ollama/LLM when available (see the formatting sketch below)
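
One way the three sources can be merged is into the chat-style train.jsonl layout that mlx_lm's LoRA trainer accepts. The sketch below is illustrative only: the file names, directory layout, and pair format are assumptions, not the exact Synthetic Data Kit output.

# Illustrative sketch: turn (question, answer) pairs from the three sources
# into a chat-style train.jsonl. Paths and the pair format are assumptions.
import json
import os

pairs = [
    ("What is virtue?", "Virtue is the only true good; all else is indifferent."),
    # ... pairs drawn from Meditations passages, question templates, and synthetic generation
]

os.makedirs("data", exist_ok=True)
with open("data/train.jsonl", "w") as f:
    for question, answer in pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        f.write(json.dumps(record) + "\n")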

🎯 Examples

Prompt: What is your philosophy?
Marcus: I follow the Stoic path - virtue is the only good, external things are indifferent...

Generated with Synthetic Data Kit on an Apple M4 Pro.
