This is the aya-expanse-8b model converted to MLC format with q4f16_1 quantization.

The model can be used with the MLC-LLM and WebLLM projects.
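As a minimal sketch of chat inference with MLC-LLM's Python engine, assuming the `mlc_llm` package is installed and the weights are fetched directly from this Hugging Face repository (the prompt text is only a placeholder):

```python
from mlc_llm import MLCEngine

# Load the converted q4f16_1 weights from this repo via the HF:// scheme.
model = "HF://huggingkot/aya-expanse-8b-q4f16_1-MLC"
engine = MLCEngine(model)

# OpenAI-style streaming chat completion.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```

For browser deployment, the same repo ID can be referenced from a WebLLM model list entry; see the WebLLM documentation for the exact configuration format.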
