---
license: apache-2.0
---

# VCoder LLaVA-1.5-13b

VCoder LLaVA-1.5-13b was trained on the COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).

VCoder is an adapter that improves existing Multimodal LLMs at object-level perception tasks by using perception modalities as control inputs, while retaining performance on other tasks.

![img](https://praeclarumjj3.github.io/vcoder/vcoder.svg)

### Citation

```bibtex
@article{jain2023vcoder,
  title={{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}},
  author={Jitesh Jain and Jianwei Yang and Humphrey Shi},
  journal={arXiv},
  year={2023}
}
```
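
### Downloading the checkpoint (sketch)

The snippet below is a minimal sketch for fetching this checkpoint with `huggingface_hub` so it can be used with the inference code in the [VCoder repository](https://github.com/SHI-Labs/VCoder). The repo id `shi-labs/vcoder_llava-v1.5-13b` is an assumed placeholder; substitute the id shown on this model page.

```python
# Minimal sketch: download the VCoder LLaVA-1.5-13b checkpoint for use with the
# inference code from https://github.com/SHI-Labs/VCoder.
# NOTE: the repo id below is an assumed placeholder, not confirmed by this card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="shi-labs/vcoder_llava-v1.5-13b")
print(f"Checkpoint downloaded to: {local_dir}")
```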