This model was quantized with AWQ using https://github.com/WanBenLe/AutoAWQ-with-llava-v1.6.git.
The source model is llava-hf/llava-v1.6-mistral-7b-hf.
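
Below is a minimal loading and inference sketch, assuming the quantized weights can be loaded through transformers' LLaVA-NeXT classes with autoawq installed; the `model_id` string is a placeholder for this repo's id, and the image URL and prompt are illustrative only.

```python
# Hedged sketch: load the AWQ-quantized LLaVA-v1.6 (Mistral) checkpoint and run one query.
# Assumes `pip install transformers autoawq accelerate pillow requests`.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "your-namespace/llava-v1.6-mistral-7b-hf-awq"  # placeholder: replace with this repo's id

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Simple image + prompt round trip (Mistral chat format used by llava-v1.6-mistral).
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```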