arXiv:2312.11011
VinaLLaMA: LLaMA-based Vietnamese Foundation Model
Published on Dec 18, 2023
Abstract
In this technical report, we present VinaLLaMA, an open-weight, state-of-the-art (SOTA) Large Language Model for the Vietnamese language, built upon LLaMA-2 with an additional 800 billion training tokens. VinaLLaMA not only demonstrates fluency in Vietnamese but also exhibits a profound understanding of Vietnamese culture, making it a truly indigenous model. VinaLLaMA-7B-chat, trained on 1 million high-quality synthetic samples, achieves SOTA results on key benchmarks, including VLSP, VMLU, and Vicuna Benchmark Vietnamese. This marks a significant advancement in the Vietnamese AI landscape and offers a versatile resource for various applications.
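Since the weights are openly released, a minimal sketch of how one might load and query the chat model with the Hugging Face transformers library is shown below; the repository ID vilm/vinallama-7b-chat, the Vietnamese prompt, and the generation settings are assumptions for illustration, not details taken from the report.

```python
# Minimal usage sketch (not from the paper): load the released chat model
# via Hugging Face transformers and generate a Vietnamese response.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/vinallama-7b-chat"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 7B model fits on one GPU
    device_map="auto",
)

# Example Vietnamese prompt (hypothetical, chosen for illustration).
prompt = "Xin chào! Bạn có thể giới thiệu về văn hóa Việt Nam không?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```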