---
license: "cc-by-sa-4.0"
language:
- en
tags:
- multimodal
- emotion recognition
- CMU-MOSEI
- computational-sequences
- audio
- video
- text
pretty_name: "CMU-MOSEI: Computational Sequences (Unofficial Mirror)"
dataset_info:
  features:
  - name: CMU_MOSEI_COVAREP.csd
    dtype: binary
  - name: CMU_MOSEI_Labels.csd
    dtype: binary
  - name: CMU_MOSEI_OpenFace2.csd
    dtype: binary
  - name: CMU_MOSEI_TimestampedPhones.csd
    dtype: binary
  - name: CMU_MOSEI_TimestampedWordVectors.csd
    dtype: binary
  - name: CMU_MOSEI_TimestampedWords.csd
    dtype: binary
  - name: CMU_MOSEI_VisualFacet42.csd
    dtype: binary
---

# CMU-MOSEI: Computational Sequences (Unofficial Mirror)

This repository provides a **mirror of the official computational sequence files from the [CMU-MOSEI dataset](https://github.com/A2Zadeh/CMU-MultimodalSDK)**, the standard feature files used in multimodal sentiment and emotion recognition research. The original download links are currently down, so this mirror is provided for the research community.

> **Note:** This is an **unofficial mirror**. All data originates from Carnegie Mellon University and the original authors. If you are a dataset creator and want this removed or modified, please open an issue.

## Dataset Structure

- **CMU_MOSEI_COVAREP.csd**: Acoustic features (COVAREP)
- **CMU_MOSEI_Labels.csd**: Sentiment/emotion labels and annotations
- **CMU_MOSEI_OpenFace2.csd**: Facial features (OpenFace 2.0)
- **CMU_MOSEI_TimestampedPhones.csd**: Timestamped phone (phoneme) alignments
- **CMU_MOSEI_TimestampedWordVectors.csd**: Timestamped word embeddings (GloVe)
- **CMU_MOSEI_TimestampedWords.csd**: Timestamped word alignments
- **CMU_MOSEI_VisualFacet42.csd**: Visual features and facial action units (FACET 4.2)

All files are in `.csd` (computational sequence) format and can be loaded using the [CMU Multimodal SDK](https://github.com/A2Zadeh/CMU-MultimodalSDK).

## Usage

```python
from mmsdk import mmdatasdk

# Load the COVAREP acoustic features from a local .csd file.
covarep = mmdatasdk.mmdataset({'covarep': 'CMU_MOSEI_COVAREP.csd'})

# Each computational sequence maps video segment IDs to
# feature and interval arrays.
print(list(covarep.computational_sequences['covarep'].data.keys())[:5])
```

An end-to-end sketch that fetches files from this mirror and aligns them is shown after the citation section below.

## Source

- Original dataset: [CMU-MOSEI on GitHub](https://github.com/A2Zadeh/CMU-MultimodalSDK)
- Official paper: Bagher Zadeh, A. et al. (2018). [Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph](https://arxiv.org/abs/1803.05449).

## License

- **License:** CC BY-SA 4.0
- All data is copyright Carnegie Mellon University and the original authors.

## Citation

If you use these files, please **cite the original authors**:

```bibtex
@inproceedings{zadeh2018multimodal,
  title={Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph},
  author={Bagher Zadeh, AmirAli and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe},
  booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={2236--2246},
  year={2018}
}
```
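
## End-to-End Example

Since this mirror is hosted on the Hugging Face Hub, the sketch below first fetches two `.csd` files with `huggingface_hub` and then loads and aligns them with the SDK. This is a minimal sketch under stated assumptions, not official SDK documentation: `REPO_ID` is a placeholder for this mirror's actual repository id, and the choice of sequences and alignment reference is illustrative.

```python
from huggingface_hub import hf_hub_download
from mmsdk import mmdatasdk

# Placeholder (assumption): replace with this mirror's actual Hub repo id.
REPO_ID = "your-username/cmu-mosei-csd-mirror"

# Fetch two computational sequences from the Hub (cached locally).
words_path = hf_hub_download(REPO_ID, "CMU_MOSEI_TimestampedWords.csd",
                             repo_type="dataset")
labels_path = hf_hub_download(REPO_ID, "CMU_MOSEI_Labels.csd",
                              repo_type="dataset")

# Build a multimodal dataset from the downloaded files.
dataset = mmdatasdk.mmdataset({"words": words_path, "labels": labels_path})

# Align every sequence to the label intervals, yielding one entry per
# annotated segment across modalities.
dataset.align("labels")
```

After alignment, the sequences in `dataset.computational_sequences` share the same segment keys, which is the usual starting point for building fused multimodal inputs.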