
Gemma-7B-it GGUF Quantized

Usage

This model can be used with the latest version of llama.cpp and with LM Studio 0.2.16 or later.
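As an illustration, the quantized file can also be loaded through the llama-cpp-python bindings (the Python wrapper around llama.cpp). This is a minimal sketch, not part of the original card: the GGUF filename, context size, and sampling settings are assumptions and should be adjusted to the actual file downloaded from this repository.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename below is an assumed example; use the actual GGUF file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-7b-it-q4.gguf",  # path to the downloaded 4-bit GGUF file (assumed name)
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

# Gemma instruction-tuned models use the <start_of_turn>/<end_of_turn> chat format.
prompt = (
    "<start_of_turn>user\n"
    "Explain what GGUF quantization is in one sentence.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

output = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(output["choices"][0]["text"])
```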

Model details

Format: GGUF
Model size: 8.54B params
Architecture: gemma
Quantization: 4-bit
