DeepSeek-VL2-DeepSeekMoE-Tiny

Introduction

This repo contains the DeepSeek-VL2-DeepSeekMoE-Tiny model used to train the EMOVA series of models. Unlike traditional LLMs built on dense Transformers, DeepSeekMoE LLMs adopt an efficient sparse Mixture-of-Experts (MoE) architecture: DeepSeek-VL2-DeepSeekMoE-Tiny contains 3B parameters in total, while only a 0.57B subset is activated for each token during inference. This checkpoint is extracted from the DeepSeek-VL2-Tiny model.

This checkpoint does not contain speech tokens, and thus should be used in Stage 1 (vision-language pre-alignment) of EMOVA training.
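To verify the total parameter count reported above, the checkpoint can be loaded directly with transformers. A minimal sketch, assuming the repo ships custom DeepSeek-V2-style modeling code loadable via trust_remote_code (the exact loading path may differ; check the repo files):

from transformers import AutoModelForCausalLM

# Load the LLM checkpoint; trust_remote_code is assumed here because
# DeepSeek-V2-style models typically ship custom modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "Emova-ollm/deepseek-vl2-deepseekmoe-tiny",
    trust_remote_code=True,
)

# Total parameters (~3B). The ~0.57B activated per token is a property of
# MoE routing at inference time and cannot be read off from this sum alone.
total = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total / 1e9:.2f}B")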

Usage

To train EMOVA with DeepSeek-VL2-DeepSeekMoE-Tiny, create a new model config and set the language_model parameters as follows. An example is provided here. Check our GitHub repo for more details on training EMOVA.

language_model=dict(
  type='EmovaDeepseekV2ForCausalLM',                                         # wrapper class type for EMOVA
  pretrained_model_name_or_path='Emova-ollm/deepseek-vl2-deepseekmoe-tiny',  # HuggingFace repo of the pre-trained LLM
  attn_implementation="flash_attention_2",                                   # attention implementation
  from_pretrained=True,                                                      # load pre-trained weights
),
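Before launching Stage-1 training, it can also be useful to sanity-check that the checkpoint loads and generates text on its own. A minimal sketch, assuming a standard tokenizer is bundled with the repo and that plain transformers loading (rather than the EmovaDeepseekV2ForCausalLM wrapper above) resolves to the underlying DeepSeek-V2 architecture:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Emova-ollm/deepseek-vl2-deepseekmoe-tiny"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # flash_attention_2, as in the config above, also works on supported GPUs
).eval()

# Text-only smoke test: this checkpoint has no speech tokens, so only
# plain-language prompts are meaningful here.
inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))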

Citation

@article{chen2024emova,
  title={Emova: Empowering language models to see, hear and speak with vivid emotions},
  author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
  journal={arXiv preprint arXiv:2409.18042},
  year={2024}
}

@article{wu2024deepseek,
  title={Deepseek-vl2: Mixture-of-experts vision-language models for advanced multimodal understanding},
  author={Wu, Zhiyu and Chen, Xiaokang and Pan, Zizheng and Liu, Xingchao and Liu, Wen and Dai, Damai and Gao, Huazuo and Ma, Yiyang and Wu, Chengyue and Wang, Bingxuan and others},
  journal={arXiv preprint arXiv:2412.10302},
  year={2024}
}