Model

llava-llama-3-8b-v1_1-pretrain is a LLaVA projector pretrained by XTuner from Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 on the ShareGPT4V-PT dataset.
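
The projector's role can be sketched as follows. The dimensions come from the named base models (CLIP-ViT-Large-patch14-336 emits 576 patch features of size 1024; Llama-3-8B uses a hidden size of 4096), but the two-layer MLP below is an illustrative LLaVA-1.5-style stand-in with random weights, not XTuner's exact implementation:

```python
import numpy as np

# Illustrative sketch only, not XTuner's actual code.
# Assumed dimensions: CLIP-ViT-Large-patch14-336 yields 24x24 = 576 patch
# tokens of size 1024; Llama-3-8B's hidden size is 4096.
VISION_DIM, LLM_DIM, NUM_PATCHES = 1024, 4096, 576

rng = np.random.default_rng(0)

# A LLaVA-1.5-style projector: two linear layers with a GELU in between.
# Real projectors are trained (that is what the pretraining stage does);
# random weights here just demonstrate the shapes involved.
W1 = rng.standard_normal((VISION_DIM, LLM_DIM)) * 0.02
b1 = np.zeros(LLM_DIM)
W2 = rng.standard_normal((LLM_DIM, LLM_DIM)) * 0.02
b2 = np.zeros(LLM_DIM)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def project(vision_features):
    """Map CLIP patch features into the LLM's embedding space."""
    return gelu(vision_features @ W1 + b1) @ W2 + b2

patches = rng.standard_normal((NUM_PATCHES, VISION_DIM))
tokens = project(patches)
print(tokens.shape)  # (576, 4096): one LLM-space token per image patch
```

After projection, these image tokens are concatenated with the text token embeddings and fed to the language model, which is why only this small module needs training in the pretraining stage.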

The fine-tuned LLaVA model can be found at xtuner/llava-llama-3-8b-v1_1.

Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished={\url{https://github.com/InternLM/xtuner}},
    year={2023}
}