---
library_name: transformers
base_model: roberta-large
metrics:
- f1
model-index:
- name: roberta-large-emopillars-contextual
results: []
---
# roberta-large-emopillars-contextual
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on [EmoPillars'](https://huggingface.co/datasets/alex-shvets/EmoPillars) [_context-full_](https://huggingface.co/datasets/alex-shvets/EmoPillars/tree/main/context-full) subset.
## Model description
The model is a multi-label classifier over 28 emotion classes for a context-aware scenario. It takes as input a context concatenated with a character description and an utterance, and extracts emotions from the utterance only.
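The expected input string simply concatenates the context, the character name, and the quoted utterance. For illustration, here is a minimal helper for assembling it (the `build_input` name is ours, not part of the model's code):
```python
# Illustrative helper: assemble the input in the expected format.
def build_input(context: str, character: str, utterance: str) -> str:
    # Emotions are predicted for the utterance alone; the context and
    # character description only condition the prediction.
    return f'{context} {character}: "{utterance}"'

text = build_input(
    "A user watched a video of a musical performance on YouTube.",
    "User",
    "Ok is it just me or is anyone else getting goosebumps too???",
)
```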
## How to use
Here is how to use this model:
```python
>>> import torch
>>> from transformers import pipeline
>>> model_name = "alex-shvets/roberta-large-emopillars-contextual"
>>> threshold = 0.5
>>> emotions = [
>>> "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion",
>>> "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment",
>>> "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "optimism",
>>> "pride", "realization", "relief", "remorse", "sadness", "surprise", "neutral"
>>> ]
>>> label_to_emotion = dict(zip(list(range(len(emotions))), emotions))
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> pipe = pipeline("text-classification", model=model_name, truncation=True,
...                 top_k=None, device=-1 if device.type == "cpu" else 0)
>>> # each input follows the format f"{context} {character}: \"{utterance}\""
>>> utterances_in_contexts = [
>>> "A user watched a video of a musical performance on YouTube. This user expresses an opinion and thoughts. User: \"Ok is it just me or is anyone else getting goosebumps too???\"",
>>> "User: \"Sorry\", Conversational agent: \"Sorry for what??\", User: \"Don’t know what to do\""
>>> ]
>>> outcome = pipe(utterances_in_contexts)
>>> dominant_classes = [
...     [prediction for prediction in example if prediction['score'] >= threshold]
...     for example in outcome
... ]
>>> for example in dominant_classes:
>>> print(", ".join([
>>> "%s: %.2lf" % (label_to_emotion[int(prediction['label'])], prediction['score'])
>>> for prediction in sorted(example, key=lambda x: x['score'], reverse=True)
>>> ]))
surprise: 0.99, amusement: 0.87, curiosity: 0.60, nervousness: 0.58
confusion: 0.97, nervousness: 0.76, embarrassment: 0.65
```
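Since this is a multi-label classifier, scores come from an independent sigmoid per class rather than a softmax. If you prefer to call the model directly instead of going through `pipeline`, the following sketch (reusing the `emotions` list from above, and assuming the checkpoint stores a multi-label classification head) should produce the same dominant classes:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "alex-shvets/roberta-large-emopillars-contextual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "User: \"Sorry\", Conversational agent: \"Sorry for what??\", User: \"Don't know what to do\""
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    # One independent sigmoid per class (multi-label), not a softmax.
    scores = torch.sigmoid(model(**inputs).logits).squeeze(0)
for idx in (scores >= 0.5).nonzero(as_tuple=True)[0].tolist():
    print(f"{emotions[idx]}: {scores[idx].item():.2f}")
```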
## Training data
The training data consists of 93,979 samples from the [_context-full_](https://huggingface.co/datasets/alex-shvets/EmoPillars/tree/main/context-full) subset of [EmoPillars](https://huggingface.co/datasets/alex-shvets/EmoPillars), created with [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) within [our EmoPillars data synthesis pipeline on GitHub](https://github.com/alex-shvets/emopillars). [WikiPlots](https://github.com/markriedl/WikiPlots) was used as the seed corpus.
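One way to load this subset with the `datasets` library; the `data_dir` argument below assumes the subset lives in the `context-full` directory of the dataset repository, so adjust it if the layout differs:
```python
from datasets import load_dataset

# Assumption: the context-full subset is stored under the "context-full"
# directory of the dataset repository.
dataset = load_dataset("alex-shvets/EmoPillars", data_dir="context-full")
print(dataset)
```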
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 752
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
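For reference, a sketch of how these values map onto `transformers.TrainingArguments`, assuming a standard `Trainer` setup (the original training script is not included here):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-emopillars-contextual",  # hypothetical output path
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=752,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```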
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0a0+gite3b9b71
- Datasets 2.21.0
- Tokenizers 0.19.1
## Evaluation
Scores for the evaluation on the test split of the EmoPillars _context-full_ subset:
| **class** | **precision**| **recall** | **f1-score** | **support** |
| :--- | :---: | :---: | :---: | ---: |
| admiration | 0.72 | 0.68 | 0.70 | 635 |
| amusement | 0.79 | 0.63 | 0.70 | 211 |
| anger | 0.86 | 0.82 | 0.84 | 1155 |
| annoyance | 0.80 | 0.76 | 0.78 | 865 |
| approval | 0.58 | 0.42 | 0.49 | 250 |
| caring | 0.66 | 0.60 | 0.63 | 485 |
| confusion | 0.76 | 0.78 | 0.77 | 1283 |
| curiosity | 0.83 | 0.79 | 0.81 | 780 |
| desire | 0.80 | 0.75 | 0.77 | 864 |
| disappointment | 0.79 | 0.80 | 0.80 | 1264 |
| disapproval | 0.55 | 0.47 | 0.51 | 445 |
| disgust | 0.73 | 0.60 | 0.66 | 320 |
| embarrassment | 0.65 | 0.50 | 0.57 | 116 |
| excitement | 0.74 | 0.71 | 0.73 | 685 |
| fear | 0.87 | 0.85 | 0.86 | 990 |
| gratitude | 0.79 | 0.74 | 0.76 | 155 |
| grief | 0.79 | 0.71 | 0.75 | 133 |
| joy | 0.80 | 0.78 | 0.79 | 668 |
| love | 0.70 | 0.61 | 0.65 | 254 |
| nervousness | 0.81 | 0.80 | 0.80 | 1368 |
| optimism | 0.82 | 0.76 | 0.79 | 506 |
| pride | 0.85 | 0.82 | 0.83 | 497 |
| realization | 0.74 | 0.57 | 0.64 | 120 |
| relief | 0.76 | 0.67 | 0.71 | 211 |
| remorse | 0.59 | 0.53 | 0.56 | 206 |
| sadness | 0.80 | 0.79 | 0.79 | 922 |
| surprise | 0.80 | 0.78 | 0.79 | 852 |
| neutral | 0.67 | 0.57 | 0.61 | 392 |
| **micro avg** | 0.78 | 0.74 | 0.76 | 16632 |
| **macro avg** | 0.75 | 0.69 | 0.72 | 16632 |
| **weighted avg** | 0.78 | 0.74 | 0.76 | 16632 |
| **samples avg** | 0.79 | 0.76 | 0.75 | 16632 |
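These are standard multi-label metrics. Given binary indicator matrices of gold and predicted labels (predictions obtained by thresholding the per-class sigmoid scores at 0.5), a report of this shape can be produced with scikit-learn; below is a sketch with random toy data, reusing the `emotions` list from above:
```python
import numpy as np
from sklearn.metrics import classification_report

# Toy data for illustration: (n_samples, 28) binary indicator matrices.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(emotions)))
y_pred = rng.integers(0, 2, size=(100, len(emotions)))

# Produces per-class rows plus micro/macro/weighted/samples averages.
print(classification_report(y_true, y_pred, target_names=emotions, zero_division=0))
```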
When further fine-tuned on the downstream EmoContext task, this model achieves the following results:
| **task** | **precision**| **recall** | **f1-score** |
| :--- | :---: | :---: | :---: |
| EmoContext (dev) | 0.81 | 0.83 | 0.82 |
| EmoContext (test) | 0.76 | 0.78 | 0.77 |
For more details on the evaluation, please visit our [GitHub repository](https://github.com/alex-shvets/emopillars) or [paper](https://arxiv.org/abs/2504.16856).
## Citation information
If you use this model, please cite our [paper](https://arxiv.org/abs/2504.16856):
```bibtex
@misc{shvets2025emopillarsknowledgedistillation,
title={Emo Pillars: Knowledge Distillation to Support Fine-Grained Context-Aware and Context-Less Emotion Classification},
author={Alexander Shvets},
year={2025},
eprint={2504.16856},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.16856}
}
```
## Disclaimer
The model published in this repository is intended for general-purpose use and is made available to third parties. It may exhibit biases and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or systems based on it), or become users of the model themselves, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including those governing the use of Artificial Intelligence.
In no event shall the creator of the model be liable for any results arising from third-party use of this model.