---
license: gpl-3.0
---

# FacialMMT

This repo contains the data and pretrained models for FacialMMT, a framework that uses facial sequences of the real speaker to improve multimodal emotion recognition.

The model's performance on the MELD test set:

| Release  | W-F1 (%) |
|----------|----------|
| 07-10-23 | 66.73    |

It is currently ranked third on the Papers with Code leaderboard.
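W-F1 here is the weighted F1 score over MELD's seven emotion classes. As a rough illustration (not code from this repo), it can be computed with scikit-learn; the toy labels below are only placeholders:

```python
# Minimal sketch of the W-F1 metric reported above, assuming scikit-learn.
# y_true and y_pred are hypothetical placeholder labels, not real outputs.
from sklearn.metrics import f1_score

# MELD's seven emotion classes
labels = ["neutral", "surprise", "fear", "sadness", "joy", "disgust", "anger"]

y_true = ["neutral", "joy", "anger", "sadness"]    # gold labels (toy example)
y_pred = ["neutral", "joy", "neutral", "sadness"]  # model outputs (toy example)

# average="weighted" weights each class's F1 by its support,
# which is how W-F1 on MELD is conventionally reported.
w_f1 = f1_score(y_true, y_pred, average="weighted")
print(f"W-F1: {w_f1:.4f}")
```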

If you're interested, please check out this repo for a more detailed explanation of how to use our model.
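As a minimal sketch of fetching the pretrained weights hosted here with `huggingface_hub` (the repo id and checkpoint filename below are assumptions; substitute the actual names from this repo's file tree):

```python
# Hedged sketch: download a checkpoint with huggingface_hub.
# Both repo_id and filename are HYPOTHETICAL placeholders; check the
# "Files and versions" tab of this repo for the real names.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="NUSTM/FacialMMT",      # assumption: replace with this repo's id
    filename="FacialMMT_model.pt",  # assumption: replace with the actual file
)
print(f"Checkpoint downloaded to: {ckpt_path}")
```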

Paper: A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations. In Proceedings of ACL 2023 (Main Conference), pp. 15445–15459.

Authors: Wenjie Zheng, Jianfei Yu, Rui Xia, and Shijin Wang