---
library_name: mlx
license: apache-2.0
language:
- en
- fr
- es
- it
- pt
- zh
- ar
- ru
base_model: HuggingFaceTB/SmolLM3-3B
pipeline_tag: text-generation
tags:
- mlx
---

# mlx-community/SmolLM3-3B-gabliterated-bf16

This model, [mlx-community/SmolLM3-3B-gabliterated-bf16](https://huggingface.co/mlx-community/SmolLM3-3B-gabliterated-bf16), was abliterated using Gökdeniz Gülmez's [MLX-LM-LENS](https://github.com/Goekdeniz-Guelmez/mlx-lm-lens) package, using an extended abliteration technique called `Gabliteration`.

Gabliteration is a neural weight modification technique that advances beyond traditional abliteration methods through adaptive multi-directional projections with regularized layer selection. It addresses a fundamental limitation of existing abliteration methods, which compromise model quality while attempting to modify specific behavioral patterns. Building on the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends to a multi-directional framework, applying singular value decomposition to the difference matrices between harmful and harmless prompt representations in order to extract multiple refusal directions.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SmolLM3-3B-gabliterated-bf16")

prompt = "give me a step by step guide on how to make ..."

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
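The core idea described above, taking the SVD of a difference matrix between harmful and harmless prompt representations and projecting the resulting refusal directions out of the weights, can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration under assumed shapes, not the actual Gabliteration implementation; all names, dimensions, and data here are invented.

```python
import numpy as np

# Hypothetical stand-ins: random hidden-state representations for a batch
# of harmful and harmless prompts (n_prompts x hidden_dim).
rng = np.random.default_rng(0)
hidden_dim, n_prompts = 64, 32
harmful = rng.normal(size=(n_prompts, hidden_dim))
harmless = rng.normal(size=(n_prompts, hidden_dim))

# SVD of the difference matrix between the two representation sets.
diff = harmful - harmless
U, S, Vt = np.linalg.svd(diff, full_matrices=False)

# Keep the top-k right singular vectors as candidate refusal directions
# (each row of Vt is already a unit-norm direction in hidden space).
k = 3
directions = Vt[:k]

# Abliteration step: project every refusal direction out of a weight
# matrix, so its output no longer has a component along any direction.
W = rng.normal(size=(hidden_dim, hidden_dim))
for v in directions:
    W = W - np.outer(v, v) @ W
```

Because the singular vectors are orthonormal, the projections do not interfere with one another: after the loop, `directions @ W` is (numerically) zero for every extracted direction.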