llava-v1.6-mistral-7b-hf-nf4 is a bitsandbytes (bnb) NF4 quant of llava-v1.6-mistral-7b-hf.
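
A minimal loading sketch (assuming transformers, bitsandbytes, and accelerate are installed, and that the NF4 quantization config ships with the checkpoint) works the same way as for the parent model:

```python
# Minimal loading sketch, assuming transformers, bitsandbytes, and accelerate
# are installed and the bnb NF4 quantization config ships with the checkpoint.
import torch
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "2dameneko/llava-v1.6-mistral-7b-hf-nf4"

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # compute dtype; the weights stay NF4-quantized
    device_map="auto",
)
```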

For batch processing, you can use ide-cap-chan.

All other features are inherited from the parent model.
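
For example, continuing from the loading sketch above, a single-image query can follow the parent model's usual prompt format (the image URL below is a placeholder):

```python
# Usage sketch, continuing from the loading snippet above; the image URL is a placeholder.
import requests
from PIL import Image

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"  # parent model's Mistral-style prompt

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```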

Model size: 4.03B params, stored as Safetensors with F32, FP16, and U8 tensors.