AMKCode/gemma-2-2b-it-q4f32_1-MLC

This model was compiled with MLC-LLM using q4f32_1 quantization from google/gemma-2-2b-it. The conversion was performed with the MLC-Weight-Conversion space.

To run this model, please first install MLC-LLM.
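One way to install it is from the prebuilt nightly wheels, as described in the MLC-LLM installation docs; this is a sketch for a CPU-only setup, and the exact package names depend on your platform and accelerator:

```shell
# CPU build shown as an example; consult the MLC-LLM install docs
# for the CUDA/Metal/Vulkan wheel that matches your hardware.
python -m pip install --pre -U -f https://mlc.ai/wheels \
    mlc-llm-nightly-cpu mlc-ai-nightly-cpu
```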

To chat with the model from your terminal:

mlc_llm chat HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC

For more information on how to use MLC-LLM, please visit the MLC-LLM documentation.
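Besides the CLI, MLC-LLM also exposes an OpenAI-style Python API. A minimal sketch, following the MLC-LLM quick-start documentation (assumes `mlc_llm` is installed and the model weights fit in memory; the prompt string is illustrative):

```python
from mlc_llm import MLCEngine

model = "HF://AMKCode/gemma-2-2b-it-q4f32_1-MLC"
engine = MLCEngine(model)

# OpenAI-style chat completion, streamed chunk by chunk.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is q4f32_1 quantization?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```

The first run downloads the weights from the Hugging Face Hub; subsequent runs reuse the local cache.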


Model tree for AMKCode/gemma-2-2b-it-q4f32_1-MLC

Base model: google/gemma-2-2b, finetuned as google/gemma-2-2b-it, quantized into this model.