# Emoloom-2B
Emoloom-2B is a ~2B-parameter emotion understanding model that outputs multi-label emotion categories and continuous VAD (Valence, Arousal, Dominance) for dialogue utterances. It is fine-tuned from Qwen/Qwen1.5-1.8B-Chat with SFT on a curated mix of GoEmotions and Empathetic Dialogues, plus consistency constraints to keep JSON outputs robust and parsing-friendly.
Output format (single-line JSON):

```json
{"labels": ["sad","anxious"], "vad": {"v": 0.42, "a": 0.31, "d": 0.28}, "rationale": "short evidence"}
```
## Highlights
- Dual signal: multi-label categories + continuous VAD in [0,1], two decimals.
- Robust JSON: the KV cache is disabled during training and generation to keep the one-line JSON formatting consistent.
- Long-tail focus: sampling and weak-label cleanup reduce "mode collapse" onto majority classes.
- Paper-ready figures: bundled plotting code exports high-res bar/radar/CI-band PNGs.
## Results (dev & cross-corpus)
| Exp | Macro-F1 | Macro-P | Macro-R | VAD (1−RMSE) | ParseOK | n (dev) |
|---|---|---|---|---|---|---|
| sft_qwen_mix2080 | 0.3500 | 0.5000 | 0.2693 | 0.9417 | 1.000 | 3663 |
| sft_qwen_mix5050 | 0.3470 | 0.5000 | 0.2657 | 0.9337 | 1.000 | 3309 |
| sft_qwen_mix8020 | 0.3341 | 0.5000 | 0.2509 | 0.9135 | 1.000 | 2068 |
| sft_qwen_mix2080_dd_quick (DailyDialog, quick) | 0.3071 | 0.5000 | 0.2136 | 0.8066 | 0.976 | 6261 |
Notes:
- ParseOK = fraction of generations that are valid, one-line JSON.
- The VAD score is reported as 1 − RMSE (higher is better); a computation sketch follows.
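
A minimal sketch of how these two quantities can be computed (helper names are illustrative, not taken from the released evaluation code):

```python
import json
import numpy as np

def vad_score(preds, golds):
    """1 - RMSE over the stacked (v, a, d) values; higher is better."""
    p = np.array([[x["v"], x["a"], x["d"]] for x in preds], dtype=float)
    g = np.array([[x["v"], x["a"], x["d"]] for x in golds], dtype=float)
    return 1.0 - float(np.sqrt(np.mean((p - g) ** 2)))

def parse_ok(generations):
    """Fraction of generations that are a single line of valid JSON."""
    ok = 0
    for text in generations:
        line = text.strip()
        try:
            json.loads(line)
            ok += "\n" not in line  # counts 1 only if valid AND one line
        except json.JSONDecodeError:
            pass
    return ok / max(len(generations), 1)
```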
## Model Details
- Base: `Qwen/Qwen1.5-1.8B-Chat`
- Size: ~1.8B params
- Architecture: causal decoder-only transformer
- Precision: BF16 training; evaluation in BF16, with FP16/FP32 fallback
- Tokenizer: Qwen tokenizer (pad token set to EOS if missing)
## Training Data & Processing
- Sources: GoEmotions (multi-label), Empathetic Dialogues (dialogue empathy).
- Mixing: ratios explored (20:80, 50:50, 80:20); 20:80 gave the best trade-off.
- QC: remove toxic or unclear samples; enforce a minimum VAD confidence; apply a short rationale template.
- Target JSON: `{labels, vad:{v,a,d}, rationale}` with two-decimal VAD (see the sketch below).
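
For reference, a minimal sketch of serializing one training target in this format (the helper name and field handling are assumptions, not the released preprocessing code):

```python
import json

def build_target(labels, v, a, d, rationale):
    # One-line JSON target with VAD rounded to two decimals
    # (json.dumps does not zero-pad, so 0.3 renders as 0.3).
    return json.dumps(
        {
            "labels": labels,
            "vad": {"v": round(v, 2), "a": round(a, 2), "d": round(d, 2)},
            "rationale": rationale,
        },
        ensure_ascii=False,
    )

print(build_target(["sad", "anxious"], 0.42, 0.31, 0.28, "short evidence"))
# {"labels": ["sad", "anxious"], "vad": {"v": 0.42, "a": 0.31, "d": 0.28}, "rationale": "short evidence"}
```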
## Fine-tuning Setup (SFT)
- Max length: typically 1024β1536 tokens (adaptive truncation for stability)
- Batch: micro-batch 1, gradient accumulation up to 128 (OOM-safe)
- LR: ~1.2e-5 cosine decay, warmup ~3%
- Stability: gradient checkpointing; `use_cache=False` at train/eval (configuration sketch below)
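
A minimal sketch of these knobs with `transformers.TrainingArguments` (values mirror the bullets above; the actual training script and trainer wiring are not reproduced here):

```python
from transformers import TrainingArguments

# Illustrative stability/optimization settings matching the bullets above.
args = TrainingArguments(
    output_dir="emoloom-2b-sft",
    per_device_train_batch_size=1,     # micro-batch 1
    gradient_accumulation_steps=128,   # effective batch via accumulation (OOM-safe)
    learning_rate=1.2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    bf16=True,
    gradient_checkpointing=True,
)

# Gradient checkpointing is incompatible with the KV cache, so the model is
# trained and evaluated with the cache disabled:
# model.config.use_cache = False
```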
## Evaluation
- Prompts build a short system + user pair (context + utterance).
- Greedy decode, max_new_tokens ~196 (quick eval uses 48).
- Metrics (scoring sketch after this list):
- Multi-label Macro-F1 / P / R on gold label space
- VAD 1βRMSE on [v,a,d]
- ParseOK for JSON validity
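
A minimal scoring sketch for the multi-label metrics using scikit-learn (the gold/pred lists are toy examples; the actual evaluation harness is not reproduced here):

```python
from sklearn.metrics import precision_recall_fscore_support
from sklearn.preprocessing import MultiLabelBinarizer

gold = [["sad", "anxious"], ["joyful"]]     # gold label sets (toy example)
pred = [["sad"], ["joyful", "grateful"]]    # predicted label sets (toy example)

mlb = MultiLabelBinarizer()
mlb.fit(gold + pred)                        # in practice, fit on the fixed gold label space
y_true, y_pred = mlb.transform(gold), mlb.transform(pred)

p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Macro-P={p:.4f} Macro-R={r:.4f} Macro-F1={f1:.4f}")
```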
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import json, torch

name = "Lixeeone/Emoloom-2B"
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
if tok.pad_token_id is None:
    tok.pad_token = tok.eos_token
model.config.use_cache = False  # keep output format stable

context = "We argued last night but made up this morning."
utterance = "I'm still a bit shaken though."

sys = ("You are an empathetic assistant. Identify emotion labels (multi-label) "
       "and estimate VAD (Valence, Arousal, Dominance in [0,1]). Respond with STRICT one-line JSON only.")
usr = (
    "Task: Read the text and provide emotion labels and VAD with two decimals, plus a brief rationale (<=30 words).\n"
    "Return JSON ONLY, single line:\n"
    '{"labels": [...], "vad": {"v": 0.00, "a": 0.00, "d": 0.00}, "rationale": "..."}\n'
    f"Context: {context}\n"
    f'Text: "{utterance}"'
)

msgs = [{"role": "system", "content": sys}, {"role": "user", "content": usr}]
prompt = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
inp = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inp, max_new_tokens=128, do_sample=False, use_cache=False)

gen = tok.decode(out[0][inp["input_ids"].shape[1]:], skip_special_tokens=True)
pred = json.loads(gen)  # {"labels": [...], "vad": {"v": .., "a": .., "d": ..}, "rationale": "..."}
print(pred)
```
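
If a generation ever wraps the JSON in extra text, a tolerant parse can be used as a fallback (illustrative helper, not part of the model or its released code):

```python
import json
import re

def parse_prediction(text):
    """Strict parse first; fall back to the first {...} span in the output."""
    try:
        return json.loads(text.strip())
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, flags=re.DOTALL)
        if match is None:
            raise
        return json.loads(match.group(0))

pred = parse_prediction(gen)
print(pred["labels"], pred["vad"])
```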