Sam Purkis (SamPurkis)
3 followers (jonathan-roberts1, Illia56, 21world) · 5 following
Links: smpurkis · sam-purkis-4baa6668
AI & ML interests
None yet
Recent Activity
updated a model about 18 hours ago: SamPurkis/gpt-oss-puzzle-88B-GGUF
published a model about 18 hours ago: SamPurkis/gpt-oss-puzzle-88B-GGUF
reacted to eaddario's post with 👍 about 2 months ago:
Experimental global target bits-per-weight quantization of mistralai/Ministral-3-14B-Instruct-2512 and mistralai/Ministral-3-14B-Reasoning-2512.

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most, producing high-quality models that meet a precise global file-size target.

Key advantages:
- VRAM maximization: generates high-quality models sized exactly to fit hardware constraints (e.g., fitting a model into exactly 24 GB of VRAM).
- Data-driven precision: the quantization mix is determined by measured weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD-versus-size trade-offs.

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the model cards:
https://huggingface.co/eaddario/Ministral-3-14B-Instruct-2512-GGUF
https://huggingface.co/eaddario/Ministral-3-14B-Reasoning-2512-GGUF
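To make the allocation idea concrete, here is a minimal, hypothetical Python sketch of the core step: given per-tensor sensitivity scores (assumed to be measured beforehand), greedily spend a global bit budget on the tensors where extra precision buys the most error reduction. All names (Tensor, allocate_bpw, BIT_LEVELS) are illustrative assumptions; this is not eaddario's actual tooling, which works on GGUF tensors inside llama.cpp and uses measured PPL/KLD error.

```python
# Hypothetical sketch of global target-BPW allocation (not the actual tool):
# start every tensor at the lowest precision, then greedily upgrade the tensor
# with the best sensitivity-per-extra-bit ratio until the budget is exhausted.

from dataclasses import dataclass

# Candidate per-tensor bit widths, lowest to highest precision (assumed set).
BIT_LEVELS = [2, 3, 4, 5, 6, 8]

@dataclass
class Tensor:
    name: str
    n_weights: int      # number of weights in this tensor
    sensitivity: float  # quantization error sensitivity, assumed pre-measured
    bits: int = BIT_LEVELS[0]

def allocate_bpw(tensors: list[Tensor], target_bpw: float) -> float:
    """Assign per-tensor bit widths so the global average BPW <= target_bpw."""
    total_weights = sum(t.n_weights for t in tensors)
    budget = target_bpw * total_weights            # total bit budget for the file
    spent = sum(t.bits * t.n_weights for t in tensors)

    while True:
        best, best_gain = None, 0.0
        for t in tensors:
            idx = BIT_LEVELS.index(t.bits)
            if idx + 1 >= len(BIT_LEVELS):
                continue                           # already at max precision
            extra_bits = (BIT_LEVELS[idx + 1] - t.bits) * t.n_weights
            if spent + extra_bits > budget:
                continue                           # upgrade would bust the budget
            gain = t.sensitivity / extra_bits      # error reduced per bit spent
            if gain > best_gain:
                best, best_gain = t, gain
        if best is None:
            break                                  # no affordable upgrade left
        idx = BIT_LEVELS.index(best.bits)
        spent += (BIT_LEVELS[idx + 1] - best.bits) * best.n_weights
        best.bits = BIT_LEVELS[idx + 1]

    return spent / total_weights                   # achieved global BPW

# Example: sensitive attention tensors get more bits than the FFN under a 4.5 BPW cap.
tensors = [
    Tensor("attn_q", 10_000_000, sensitivity=9.0),
    Tensor("attn_k", 10_000_000, sensitivity=8.5),
    Tensor("ffn_up", 40_000_000, sensitivity=2.0),
]
print(f"achieved BPW: {allocate_bpw(tensors, target_bpw=4.5):.2f}")
```

A real implementation would re-estimate each tensor's sensitivity after every upgrade (the marginal benefit of extra bits shrinks as precision rises) rather than treating it as a constant, but the budget-constrained greedy structure is the same.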
Organizations
None yet
SamPurkis's datasets
None public yet