# Korean Emotion Classifier
This model classifies Korean text into **six emotions: 분노 (anger), 불안 (anxiety), 슬픔 (sadness), 평온 (calm), 당황 (embarrassment), and 기쁨 (joy)**. It was fine-tuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base).
## Evaluation Results
| Emotion | Precision | Recall | F1-Score |
|---|---|---|---|
| 분노 (anger) | 0.9801 | 0.9788 | 0.9795 |
| 불안 (anxiety) | 0.9864 | 0.9848 | 0.9856 |
| 슬픔 (sadness) | 0.9837 | 0.9854 | 0.9845 |
| 평온 (calm) | 0.9782 | 0.9750 | 0.9766 |
| 당황 (embarrassment) | 0.9607 | 0.9668 | 0.9652 |
| 기쁨 (joy) | 0.9857 | 0.9886 | 0.9872 |
- **Accuracy:** 0.9831
- **Macro Avg:** Precision 0.9791 / Recall 0.9804 / F1 0.9798
- **Weighted Avg:** Precision 0.9831 / Recall 0.9831 / F1 0.9831
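As a quick sanity check, the macro average is simply the unweighted mean of the six per-class scores; the sketch below recomputes it for precision and F1, with the values copied from the table above:

```python
# Per-class values copied from the evaluation table above.
precision = [0.9801, 0.9864, 0.9837, 0.9782, 0.9607, 0.9857]
f1 = [0.9795, 0.9856, 0.9845, 0.9766, 0.9652, 0.9872]

# Macro average = unweighted mean across the six classes.
macro_precision = sum(precision) / len(precision)
macro_f1 = sum(f1) / len(f1)

print(f"Macro precision: {macro_precision:.4f}")  # Macro precision: 0.9791
print(f"Macro F1:        {macro_f1:.4f}")         # Macro F1:        0.9798
```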
## Usage

```python
from transformers import pipeline
import torch

model_id = "Seonghaa/korean-emotion-classifier-roberta"
device = 0 if torch.cuda.is_available() else -1  # GPU if available, otherwise CPU (-1)

clf = pipeline(
    "text-classification",
    model=model_id,
    tokenizer=model_id,
    device=device,
)

texts = [
    "오늘 길에서 10만원을 주웠어",    # "I found 100,000 won on the street today"
    "오늘 친구들이랑 노래방에 갔어",  # "I went to karaoke with my friends today"
    "오늘 시험 망쳤어",              # "I bombed my exam today"
]

for t in texts:
    pred = clf(t, truncation=True, max_length=256)[0]
    print(f"Input: {t}")
    print(f"→ Predicted emotion: {pred['label']}, score: {pred['score']:.4f}\n")
```
Example output:

```
Input: 오늘 길에서 10만원을 주웠어
→ Predicted emotion: 기쁨, score: 0.9619

Input: 오늘 친구들이랑 노래방에 갔어
→ Predicted emotion: 기쁨, score: 0.9653

Input: 오늘 시험 망쳤어
→ Predicted emotion: 슬픔, score: 0.9602
```

(기쁨 = joy, 슬픔 = sadness.)
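The `score` field returned by the pipeline is the softmax probability of the top class over the model's six output logits. Below is a minimal sketch of that post-processing step; the logit values are made up for illustration and are not real model outputs:

```python
import math

# The six labels mirror the model card; logits are hypothetical example values.
labels = ["분노", "불안", "슬픔", "평온", "당황", "기쁨"]
logits = [-1.2, -0.8, 0.3, -0.5, -1.0, 4.1]

# Numerically stable softmax: subtract the max logit before exponentiating.
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted label is the arg-max class; its probability is the "score".
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"→ Predicted emotion: {labels[best]}, score: {probs[best]:.4f}")
```

This mirrors what the `text-classification` pipeline does internally before formatting its `{'label': ..., 'score': ...}` result.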