Meta-Llama-3.1-8B-Instruct-Add-Speech-Token-4096-Nostrip

Introduction

This repo contains the Meta-Llama-3.1-8B-Instruct-Add-Speech-Token-4096-Nostrip model used to train the EMOVA series of models. Starting from the original Llama-3.1-8B-Instruct checkpoint, we insert speech tokens into its vocabulary for end-to-end omni-modal alignment, as shown below. The EMOVA speech tokenizer uses a total of 4096 speech tokens. This checkpoint should therefore be used as the initialization for Stage 2 (omni-modal text-centric alignment) of EMOVA training.

# Source code can be found at https://github.com/emova-ollm/EMOVA#insert-speech-tokens-into-llm-vocabulary
python scripts/insert_speech_token.py \
  --origin_model_path meta-llama/Llama-3.1-8B-Instruct \
  --saved_model_path ./Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip \
  --num_speech_tokens 4096
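
Conceptually, the script above extends the tokenizer with 4096 new speech tokens and resizes the LLM's embedding matrices to match. Below is a minimal sketch of that effect using the Hugging Face transformers API; the <|speech_i|> token naming is an illustrative assumption, not necessarily the exact format used by the EMOVA script.

# Minimal sketch of inserting speech tokens, assuming the transformers library.
# The "<|speech_{i}|>" naming scheme is hypothetical, for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Register 4096 new speech tokens in the vocabulary.
speech_tokens = [f"<|speech_{i}|>" for i in range(4096)]
tokenizer.add_tokens(speech_tokens)

# Grow the input/output embedding matrices to cover the new tokens.
model.resize_token_embeddings(len(tokenizer))

tokenizer.save_pretrained("./Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip")
model.save_pretrained("./Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip")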

Usage

To train EMOVA with Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip, we need to create a new model config and set its language_model parameters as follows. An example is provided here. Check our GitHub repo for more details on training EMOVA.

language_model=dict(
  type='EmovaLlamaForCausalLM',  # wrapper class type for EMOVA
  pretrained_model_name_or_path='Emova-ollm/Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip',  # HuggingFace repo of the pre-trained LLM
  attn_implementation="flash_attention_2",  # attention implementation
  from_pretrained=True,  # load pre-trained weights
),
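
As a quick sanity check, the expanded checkpoint can also be loaded directly with transformers to confirm the enlarged vocabulary. A minimal sketch, assuming standard transformers usage; the exact final vocabulary size depends on how the EMOVA script registers the added tokens.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Emova-ollm/Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# The vocabulary should be roughly 4096 tokens larger than the original
# Llama-3.1-8B-Instruct vocabulary (128,256 tokens).
print(len(tokenizer))
print(model.get_input_embeddings().weight.shape)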

Citation

@article{chen2024emova,
  title={{EMOVA}: Empowering language models to see, hear and speak with vivid emotions},
  author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
  journal={arXiv preprint arXiv:2409.18042},
  year={2024}
}

@article{grattafiori2024llama,
  title={The {Llama 3} herd of models},
  author={Grattafiori, Aaron and Dubey, Abhimanyu and Jauhri, Abhinav and Pandey, Abhinav and Kadian, Abhishek and Al-Dahle, Ahmad and Letman, Aiesha and Mathur, Akhil and Schelten, Alan and Vaughan, Alex and others},
  journal={arXiv preprint arXiv:2407.21783},
  year={2024}
}