HuatuoGPT-Vision-34B

Introduction

HuatuoGPT-Vision is a multimodal LLM for medical applications, trained with the PubMedVision dataset. HuatuoGPT-Vision-34B is built on Yi-1.5-34B using the LLaVA-v1.5 architecture.

Quick Start

  1. Get the model inference code from GitHub.
git clone https://github.com/FreedomIntelligence/HuatuoGPT-Vision.git
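You will also need the model weights locally. A minimal sketch of fetching them from the Hugging Face Hub with huggingface_hub follows; the local_dir value is an arbitrary placeholder.

from huggingface_hub import snapshot_download

# Download the checkpoint and keep the returned local path for the inference step below.
huatuogpt_vision_model_path = snapshot_download(
    repo_id="FreedomIntelligence/HuatuoGPT-Vision-34B",
    local_dir="HuatuoGPT-Vision-34B",  # placeholder local directory
)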
  2. Model inference
from cli import HuatuoChatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # replace with the path(s) to your image(s)

bot = HuatuoChatbot(huatuogpt_vision_model_path)  # loads the model from the local checkpoint path
output = bot.inference(query, image_paths)  # generates a response for the query and image(s)
print(output)  # prints the model output
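To run the same question over several images, the loaded bot can be reused; the helper below is a sketch built only on the inference call shown above, and the directory name and .jpg extension are placeholders.

from pathlib import Path

def describe_images(bot, image_dir, query='What does the picture show?'):
    # Run the same query over every image in a directory and collect the outputs.
    results = {}
    for image_path in sorted(Path(image_dir).glob('*.jpg')):
        results[str(image_path)] = bot.inference(query, [str(image_path)])
    return results

outputs = describe_images(bot, 'path/to/images')  # placeholder directory
for path, answer in outputs.items():
    print(path, answer)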

Citation

@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale}, 
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280}, 
}