Emotion Classification GGUF

Model Description

This repository contains a GGUF version of gemma-3-1b-it-qat, specially configured for zero-shot emotion classification.

The goal is to offer a lightweight, fast, and universal alternative to traditional classifiers (like fine-tuned BERT models). Instead of relying on a model trained on a fixed dataset, this GGUF leverages the power of a foundational language model and a modified chat template to transform it into a specialized text analysis tool.

This approach makes emotion classification highly accessible, requiring no specialized training or complex setups.

✨ Key Features

  • ⚡ Fast & Accessible: The GGUF format allows for very fast inference, even on a CPU, making emotion classification accessible without a powerful GPU.
  • 🎯 Prompt-Specialized: The model is guided by a detailed, built-in system prompt that instructs it to classify text against a predefined list of 30+ emotions and provide an explanation in a structured JSON format.
  • 🔄 Stateless (No Conversation Memory): Thanks to the custom template, the model only considers the user's current input. It has no conversational memory, making it perfect for API-like use cases (one input -> one output).
  • 🌍 Multilingual: Based on the Gemma model, it is theoretically capable of classifying emotions in any language supported by the base model. Performance will vary depending on the base model's proficiency in a given language.
  • 🔧 Easily Adaptable: While this model is ready for emotion classification, the underlying method can be easily adapted for other NLP tasks like sentiment analysis, intent detection, or topic modeling simply by changing the system prompt.

🚀 How to Use

This model is designed to be used with any GGUF-compatible runner, such as llama.cpp, LM Studio, Ollama, and others.

The core logic is embedded directly into the chat template within the GGUF file. Most modern tools will automatically detect and use this template. All you need to do is provide your text as the user's prompt, and the model will perform the classification.
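
For example, with the llama-cpp-python bindings (a minimal sketch; the filename below is a placeholder for whatever you named the downloaded GGUF):

```python
from llama_cpp import Llama

# Recent llama-cpp-python builds pick up the chat template embedded in the
# GGUF metadata, so the hardcoded system prompt is applied automatically.
llm = Llama(
    model_path="./emotion-classification-gemma-3-1b-it.gguf",  # placeholder path
    n_ctx=2048,
    verbose=False,
)

# Only the user message is needed; the template injects everything else.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I finally got the job, I can't believe it!"}],
    temperature=0.0,  # deterministic decoding helps keep the JSON format stable
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```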

Expected Output

The model will return a response in the JSON format specified in the prompt:

Input (French for "the sky is blue"):

"le ciel est bleu"

Model Output:

{
  "emotions": [ "Neutral" ],
  "explanation": "The sentence simply describes a visual observation of the sky – it's neutral in terms of expressing emotion."
}

Emotions List

  • Contentment
  • Joy
  • Euphoria
  • Excitement
  • Disappointment
  • Sadness
  • Regret
  • Irritation
  • Frustration
  • Anger
  • Anxiety
  • Fear
  • Astonishment
  • Disgust
  • Hate
  • Pleasure
  • Desire
  • Affection
  • Trust
  • Distrust
  • Gratitude
  • Compassion
  • Admiration
  • Contempt
  • Guilt
  • Shame
  • Pride
  • Jealousy
  • Envy
  • Hope
  • Nostalgia
  • Relief
  • Curiosity
  • Boredom
  • Neutral
  • Fatigue
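
For downstream processing, it can help to mirror this list in code and filter the model's answer against it. A minimal sketch (the constant and helper names below are illustrative, not part of the model):

```python
# Allowed labels, mirroring the list above (and the embedded system prompt).
EMOTIONS = frozenset({
    "Contentment", "Joy", "Euphoria", "Excitement", "Disappointment", "Sadness",
    "Regret", "Irritation", "Frustration", "Anger", "Anxiety", "Fear",
    "Astonishment", "Disgust", "Hate", "Pleasure", "Desire", "Affection",
    "Trust", "Distrust", "Gratitude", "Compassion", "Admiration", "Contempt",
    "Guilt", "Shame", "Pride", "Jealousy", "Envy", "Hope", "Nostalgia",
    "Relief", "Curiosity", "Boredom", "Neutral", "Fatigue",
})

def keep_known_emotions(labels):
    """Normalize capitalization and drop any label outside the predefined list."""
    return [label.title() for label in labels if label.title() in EMOTIONS]

print(keep_known_emotions(["joy", "Excitement", "Melancholy"]))  # ['Joy', 'Excitement']
```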

🛠️ The Trick: The Custom Chat Template

This model's specialization comes from a custom Jinja2 chat template, not from fine-tuning. This template forces the model to adopt a specialized question-answering behavior.

Here's how it works:

  1. Hardcoded System Prompt: A detailed system prompt is embedded at the very beginning of every request, instructing the model on its role, the list of possible emotions, and the required JSON output format.
  2. Ignoring History: The template uses a {% if loop.last %} condition. This ensures that only the very last user message is processed, making the model stateless and perfect for single-shot tasks.

Here is the template baked into this GGUF file:

{{ bos_token }}<start_of_turn>system
You are an emotion classification assistant. Your task is to analyze ALL given sentence and classify it emotions chosen from Contentment, Joy, Euphoria, Excitement, Disappointment, Sadness, Regret, Irritation, Frustration, Anger, Anxiety, Fear, Astonishment, Disgust, Hate, Pleasure, Desire, Affection, Trust, Distrust, Gratitude, Compassion, Admiration, Contempt, Guilt, Shame, Pride, Jealousy, Envy, Hope, Nostalgia, Relief, Curiosity, Boredom, Neutral, fatigue, Trust You can choose one or several emotions follow this format
___json
{
  "emotions": [ " "
  ],
  "explanation": "This is the explanation related to the listed emotions."
}
___
begin<end_of_turn>
{%- for message in messages %}
    {%- if loop.last and message['role'] == 'user' -%}
        {{ '<start_of_turn>user
' + message['content'] | trim + '<end_of_turn>
' }}
    {%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{ '<start_of_turn>model
' }}
{%- endif -%}

Note: every ___ above must be replaced by ``` in the actual template.
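
You can verify the stateless behaviour without running the model by rendering the template with Jinja2. A quick sketch (assuming you saved the template above, with ``` restored, to a file named chat_template.jinja; the filename is arbitrary):

```python
from jinja2 import Template

# Load the chat template shown above (with ``` restored in place of ___).
template = Template(open("chat_template.jinja").read())

prompt = template.render(
    bos_token="<bos>",
    add_generation_prompt=True,
    messages=[
        {"role": "user", "content": "first message"},
        {"role": "assistant", "content": "an earlier reply"},
        {"role": "user", "content": "le ciel est bleu"},
    ],
)

# Only the hardcoded system prompt and the *last* user turn appear in the
# rendered prompt; the earlier turns are skipped by the loop.last check.
print(prompt)
```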

⚠️ Limitations & Performance

This model has not been evaluated on academic emotion classification benchmarks; its performance is based on qualitative testing and may vary.

  • Accuracy: While results are often very good, they might be less precise than a specialized model fine-tuned on a domain-specific dataset.
  • Base Model Dependency: The quality of the classification is entirely dependent on the intrinsic capabilities of the original base model.
  • Format Robustness: For very complex, ambiguous, or adversarial inputs, the model might occasionally fail to adhere strictly to the JSON output format (a defensive parsing sketch follows below).
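
For the last point, a small defensive parser usually suffices. A sketch (the helper name and the Neutral fallback are illustrative choices, not part of the model):

```python
import json
import re

def extract_classification(raw_output):
    """Best-effort extraction of the {"emotions": [...], "explanation": "..."} object.

    Tolerates answers wrapped in fenced code blocks or padded with extra prose,
    and falls back to Neutral when no valid JSON object can be recovered.
    """
    match = re.search(r"\{.*\}", raw_output, flags=re.DOTALL)
    if match:
        try:
            data = json.loads(match.group(0))
            if isinstance(data, dict) and isinstance(data.get("emotions"), list):
                return data
        except json.JSONDecodeError:
            pass
    return {"emotions": ["Neutral"], "explanation": "Model output could not be parsed."}
```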
