DeepSeek-VL2-DeepSeekMoE-Tiny-Add-Speech-Token-4096-Nostrip
Introduction
This repo contains the DeepSeek-VL2-DeepSeekMoE-Tiny-Add-Speech-Token-4096-Nostrip model used to train the EMOVA series of models. Unlike traditional LLMs built on dense Transformers, DeepSeekMoE LLMs adopt an efficient sparse Mixture-of-Experts (MoE) architecture: the model contains 3B parameters in total, of which only a 0.57B subset is activated for each token during inference. This checkpoint is extracted from the DeepSeek-VL2-Tiny model.
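For reference, below is a minimal sketch of loading this LLM backbone with HuggingFace Transformers and reporting its total parameter count; it assumes the checkpoint ships custom DeepseekV2 modeling code, so trust_remote_code=True is required.

# Minimal sketch (assumption): load the MoE backbone and count its parameters.
# Total should be roughly 3B; only ~0.57B are activated per token at inference.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Emova-ollm/deepseek-vl2-deepseekmoe-tiny",
    trust_remote_code=True,      # custom DeepseekV2 modeling code
    torch_dtype=torch.bfloat16,
)

total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.2f}B")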
Based on the original DeepSeek-VL2-DeepSeekMoE-Tiny checkpoint, we further insert speech tokens into its vocabulary for end-to-end omni-modal alignment, as shown below. The EMOVA speech tokenizer uses 4096 speech tokens in total. This checkpoint should therefore be used as the initialization for Stage 2 (Omni-modal text-centric alignment) of EMOVA training.
# Source code can be found at https://github.com/emova-ollm/EMOVA#insert-speech-tokens-into-llm-vocabulary
python scripts/insert_speech_token.py \
--origin_model_path Emova-ollm/deepseek-vl2-deepseekmoe-tiny \
--saved_model_path ./deepseek-vl2-deepseekmoe-tiny_add_speech_token_4096_nostrip \
--num_speech_tokens 4096
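For clarity, the sketch below illustrates what such a vocabulary extension conceptually involves, using the standard HuggingFace tokenizer/model interface. The token naming is hypothetical, and the "nostrip" suffix is interpreted here as adding tokens without whitespace stripping; refer to the script in the EMOVA repo for the authoritative implementation.

# Conceptual sketch only (not the EMOVA script): extend an LLM vocabulary with
# speech tokens and resize the embedding table accordingly.
from tokenizers import AddedToken
from transformers import AutoModelForCausalLM, AutoTokenizer

origin = "Emova-ollm/deepseek-vl2-deepseekmoe-tiny"
saved = "./deepseek-vl2-deepseekmoe-tiny_add_speech_token_4096_nostrip"
num_speech_tokens = 4096

tokenizer = AutoTokenizer.from_pretrained(origin, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(origin, trust_remote_code=True)

# Hypothetical token naming; the real speech-token format may differ.
speech_tokens = [
    AddedToken(f"<speech_{i}>", lstrip=False, rstrip=False)  # "nostrip": keep surrounding whitespace
    for i in range(num_speech_tokens)
]
tokenizer.add_tokens(speech_tokens)

# Grow the input/output embeddings to cover the new vocabulary entries.
model.resize_token_embeddings(len(tokenizer))

tokenizer.save_pretrained(saved)
model.save_pretrained(saved)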
Usage
To train EMOVA with DeepSeek-VL2-DeepSeekMoE-Tiny-Add-Speech-Token-4096-Nostrip, we need to create a new model config and set its language_model parameters as follows. An example is provided here. Check our GitHub repo for more details on training EMOVA.
language_model=dict(
    type='EmovaDeepseekV2ForCausalLM',  # wrapper class type for EMOVA
    pretrained_model_name_or_path='Emova-ollm/deepseek-vl2-deepseekmoe-tiny_add_speech_token_4096_nostrip',  # HuggingFace repo of the pre-trained LLM
    attn_implementation="flash_attention_2",  # attention implementation
    from_pretrained=True,  # load pre-trained weights
),
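Before training, a quick sanity check is to compare the extended tokenizer against the original one; this sketch simply assumes both HuggingFace repos expose their tokenizer files.

# Sanity-check sketch: the extended tokenizer should be exactly 4096 entries larger.
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained(
    "Emova-ollm/deepseek-vl2-deepseekmoe-tiny", trust_remote_code=True)
extended = AutoTokenizer.from_pretrained(
    "Emova-ollm/deepseek-vl2-deepseekmoe-tiny_add_speech_token_4096_nostrip",
    trust_remote_code=True)

print(len(extended) - len(base))  # expected: 4096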
Citation
@article{chen2024emova,
title={Emova: Empowering language models to see, hear and speak with vivid emotions},
author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
journal={arXiv preprint arXiv:2409.18042},
year={2024}
}
@article{wu2024deepseek,
title={Deepseek-vl2: Mixture-of-experts vision-language models for advanced multimodal understanding},
author={Wu, Zhiyu and Chen, Xiaokang and Pan, Zizheng and Liu, Xingchao and Liu, Wen and Dai, Damai and Gao, Huazuo and Ma, Yiyang and Wu, Chengyue and Wang, Bingxuan and others},
journal={arXiv preprint arXiv:2412.10302},
year={2024}
}