Korean Emotion Classifier πŸ˜ƒπŸ˜‘πŸ˜’πŸ˜¨πŸ˜²πŸ˜Œ

λ³Έ λͺ¨λΈμ€ ν•œκ΅­μ–΄ ν…μŠ€νŠΈλ₯Ό **6κ°€μ§€ 감정(λΆ„λ…Έ, λΆˆμ•ˆ, μŠ¬ν””, ν‰μ˜¨, λ‹Ήν™©, 기쁨)**으둜 λΆ„λ₯˜ν•©λ‹ˆλ‹€. klue/roberta-base 기반으둜 νŒŒμΈνŠœλ‹λ˜μ—ˆμŠ΅λ‹ˆλ‹€.


πŸ“Š Evaluation Results

| Emotion | Precision | Recall | F1-Score |
|---|---|---|---|
| λΆ„λ…Έ (anger) | 0.9801 | 0.9788 | 0.9795 |
| λΆˆμ•ˆ (anxiety) | 0.9864 | 0.9848 | 0.9856 |
| μŠ¬ν”” (sadness) | 0.9837 | 0.9854 | 0.9845 |
| ν‰μ˜¨ (calm) | 0.9782 | 0.9750 | 0.9766 |
| λ‹Ήν™© (embarrassment) | 0.9607 | 0.9668 | 0.9652 |
| 기쁨 (joy) | 0.9857 | 0.9886 | 0.9872 |

- Accuracy: 0.9831
- Macro avg: Precision 0.9791 / Recall 0.9804 / F1 0.9798
- Weighted avg: Precision 0.9831 / Recall 0.9831 / F1 0.9831
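As a quick sanity check, the macro-averaged scores are the unweighted means of the per-class values above. A small sketch recomputing macro precision and macro F1 from the table:

```python
# Per-class (precision, recall, f1) copied from the evaluation table
per_class = {
    "λΆ„λ…Έ": (0.9801, 0.9788, 0.9795),
    "λΆˆμ•ˆ": (0.9864, 0.9848, 0.9856),
    "μŠ¬ν””": (0.9837, 0.9854, 0.9845),
    "ν‰μ˜¨": (0.9782, 0.9750, 0.9766),
    "λ‹Ήν™©": (0.9607, 0.9668, 0.9652),
    "기쁨": (0.9857, 0.9886, 0.9872),
}

n = len(per_class)
# Macro average = plain mean over classes, ignoring class frequencies
macro_precision = sum(p for p, _, _ in per_class.values()) / n
macro_f1 = sum(f for _, _, f in per_class.values()) / n

print(f"Macro precision: {macro_precision:.4f}")  # β†’ 0.9791
print(f"Macro F1: {macro_f1:.4f}")                # β†’ 0.9798
```

The weighted averages additionally scale each class by its support, which is why they match the overall accuracy more closely.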

πŸš€ Usage

```python
from transformers import pipeline
import torch

model_id = "Seonghaa/korean-emotion-classifier-roberta"

# Use GPU (device 0) if available, otherwise CPU (-1)
device = 0 if torch.cuda.is_available() else -1

clf = pipeline(
    "text-classification",
    model=model_id,
    tokenizer=model_id,
    device=device,
)

texts = [
    "였늘 κΈΈμ—μ„œ 10λ§Œμ›μ„ μ£Όμ› μ–΄",
    "였늘 μΉœκ΅¬λ“€μ΄λž‘ λ…Έλž˜λ°©μ— κ°”μ–΄",
    "였늘 μ‹œν—˜ 망쳀어",
]

for t in texts:
    pred = clf(t, truncation=True, max_length=256)[0]
    print(f"Input: {t}")
    print(f"β†’ Predicted emotion: {pred['label']}, score: {pred['score']:.4f}\n")
```

Example output:

```
Input: 였늘 κΈΈμ—μ„œ 10λ§Œμ›μ„ μ£Όμ› μ–΄
β†’ Predicted emotion: 기쁨, score: 0.9619

Input: 였늘 μΉœκ΅¬λ“€μ΄λž‘ λ…Έλž˜λ°©μ— κ°”μ–΄
β†’ Predicted emotion: 기쁨, score: 0.9653

Input: 였늘 μ‹œν—˜ 망쳀어
β†’ Predicted emotion: μŠ¬ν””, score: 0.9602
```
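If you need the full probability distribution over all six labels rather than just the top prediction, you can call the model directly and apply a softmax to the logits. A minimal sketch, assuming the checkpoint's config carries the Korean label names in `id2label`:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Seonghaa/korean-emotion-classifier-roberta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "였늘 μ‹œν—˜ 망쳀어"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

# Softmax turns the logits into a probability distribution over the labels
probs = torch.softmax(logits, dim=-1)[0]

# Print labels sorted by probability, highest first
for idx in probs.argsort(descending=True):
    print(f"{model.config.id2label[idx.item()]}: {probs[idx]:.4f}")
```

This is equivalent to what the pipeline does internally, but it exposes every class score instead of only the argmax.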

Model size: 111M parameters (F32, Safetensors)