This repository provides AWQ-quantized versions of the most popular NVILA models. These checkpoints let you deploy NVILA with TinyChat and unlock the full potential of your hardware.
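For context, the `w4-g128` suffix in the checkpoint names refers to 4-bit weight quantization with a group size of 128. The sketch below illustrates what plain group-wise 4-bit quantization does; it is a simplified illustration, not the actual AWQ implementation, which additionally applies activation-aware scaling to protect salient weight channels before quantizing.

```python
import torch

def quantize_w4_g128(w: torch.Tensor, group_size: int = 128):
    """Toy group-wise 4-bit quantization: every group of 128 weights
    shares one scale. AWQ additionally rescales salient channels
    before this step; that part is omitted here."""
    out_features, in_features = w.shape
    groups = w.reshape(-1, group_size)                     # group along the input dim
    scale = groups.abs().amax(dim=1, keepdim=True) / 7.0   # symmetric int4 range: [-8, 7]
    q = torch.clamp(torch.round(groups / scale), -8, 7)    # quantize to 4-bit integers
    w_hat = (q * scale).reshape(out_features, in_features) # dequantized reference
    return q.to(torch.int8), scale, w_hat

w = torch.randn(4096, 4096)
q, scale, w_hat = quantize_w4_g128(w)
print(f"max abs error: {(w - w_hat).abs().max():.4f}")
```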

One-command demo to chat with quantized NVILA models via llm-awq (NVILA-8B as an example):

```bash
cd llm-awq/tinychat
python nvila_demo.py --model-path PATH/TO/NVILA \
    --quant_path NVILA-8B-w4-g128-awq-v2.pt \
    --media PATH/TO/ANY/IMAGES/VIDEOS \
    --act_scale_path NVILA-8B-VT-smooth-scale.pt \
    --all --chunk --model_type nvila
```

This command downloads the quantized NVILA model and launches an interactive chat demo. If you have already downloaded the files, simply point the paths to your local copies.
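If you prefer to fetch the checkpoints ahead of time, the sketch below uses `huggingface_hub` to pull them from this repository. The `allow_patterns` filter and the exact file layout are assumptions for illustration; adjust them to the files you actually need.

```python
# Sketch: pre-download the quantized checkpoints for offline use.
# Assumes the .pt files are hosted in the Efficient-Large-Model/NVILA-AWQ
# repository shown on this page; the pattern filter is hypothetical.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Efficient-Large-Model/NVILA-AWQ",
    allow_patterns=["NVILA-8B-*"],  # fetch only the NVILA-8B files
)
print(f"Checkpoints saved to: {local_dir}")
```

You can then pass the downloaded `NVILA-8B-w4-g128-awq-v2.pt` and `NVILA-8B-VT-smooth-scale.pt` to `--quant_path` and `--act_scale_path`, respectively.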
