---
language: vi
tags:
- emotion-recognition
- vietnamese
- phobert
license: apache-2.0
datasets:
- VSMEC
metrics:
- accuracy
- f1
model-index:
- name: phobert-emotion
  results:
  - task:
      type: text-classification
      name: Emotion Recognition
    dataset:
      name: VSMEC
      type: custom
    metrics:
    - name: Accuracy
      type: accuracy
      value: <INSERT_ACCURACY>
    - name: F1 Score
      type: f1
      value: <INSERT_F1_SCORE>
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
---
# PhoBERT-Emotion: Emotion Recognition for Vietnamese Text
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the VSMEC dataset for emotion recognition in Vietnamese text. Evaluation results on VSMEC are reported in the Results section below.
## Model Details
- Base Model: [vinai/phobert-base](https://huggingface.co/vinai/phobert-base)
- Dataset: VSMEC (Vietnamese Social Media Emotion Corpus)
- Fine-tuning Framework: HuggingFace Transformers
- Hyperparameters (a configuration sketch follows this list):
  - Batch size: 32
  - Learning rate: 5e-5
  - Epochs: 100
  - Max sequence length: 256
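
As a rough illustration, the hyperparameters above map onto the HuggingFace `Trainer` API as in the minimal sketch below. This is not the actual training script: `output_dir`, `train_dataset`, and `eval_dataset` are hypothetical placeholders, and the 256-token limit is applied at tokenization time rather than in `TrainingArguments`.

```python
from transformers import Trainer, TrainingArguments

# Minimal sketch mirroring the hyperparameters listed above (not the original script).
training_args = TrainingArguments(
    output_dir="phobert-emotion-finetune",  # hypothetical output directory
    per_device_train_batch_size=32,         # batch size 32
    learning_rate=5e-5,                     # learning rate 5e-5
    num_train_epochs=100,                   # epochs 100
)

# The max sequence length of 256 is enforced when tokenizing the corpus, e.g.
# tokenizer(texts, truncation=True, max_length=256).

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```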
## Dataset
The model was trained on the VSMEC dataset, which contains Vietnamese social media text annotated with emotion labels. The dataset includes the following emotion categories:
{"Anger": 0, "Disgust": 1, "Enjoyment": 2, "Fear": 3, "Other": 4, "Sadness": 5, "Surprise": 6}
.
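
For illustration only, the sketch below shows how this mapping could be attached to a classification head as `label2id`/`id2label` when setting up fine-tuning; whether the released checkpoint's config carries these label names is not guaranteed by this card.

```python
from transformers import AutoModelForSequenceClassification

# VSMEC emotion labels, copied from the mapping above
label2id = {"Anger": 0, "Disgust": 1, "Enjoyment": 2, "Fear": 3,
            "Other": 4, "Sadness": 5, "Surprise": 6}
id2label = {i: name for name, i in label2id.items()}

# Illustrative setup: store the mapping in the model config so that predictions
# can be reported as label names instead of raw class indices.
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/phobert-base",
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label,
)
```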
## Results
The model was evaluated using the following metrics (a computation sketch follows the list):
- Accuracy: <INSERT_ACCURACY>
- F1 Score: <INSERT_F1_SCORE>
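
For reference, the sketch below shows one common way to compute these metrics with scikit-learn, for example as a `compute_metrics` callback for `Trainer`. The F1 averaging scheme shown (`weighted`) is an assumption; the card does not state which averaging was used.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Compute accuracy and F1 from the (logits, labels) pair passed in by Trainer."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # averaging is an assumption
    }
```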
## Usage
You can use this model for emotion recognition in Vietnamese text. Below is an example of how to use it with the HuggingFace Transformers library:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("visolex/phobert-emotion")
model = AutoModelForSequenceClassification.from_pretrained("visolex/phobert-emotion")

# VSMEC label mapping (see the Dataset section above)
id2label = {0: "Anger", 1: "Disgust", 2: "Enjoyment", 3: "Fear", 4: "Other", 5: "Sadness", 6: "Surprise"}

text = "Tôi rất vui vì hôm nay trời đẹp!"  # "I'm very happy because the weather is nice today!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():  # inference only, no gradient tracking needed
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted emotion: {id2label[predicted_class]}")
```