---
base_model:
- google/gemma-3-27b-it-qat
---

# gemma-3-27b-it-qat GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-3-27b-it-qat-GGUF -c 0 -fa
```

Then, access http://localhost:8080
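Besides the built-in web UI, `llama-server` also exposes an OpenAI-compatible HTTP API, so you can query the model programmatically. A minimal sketch, assuming the server from the command above is already running on the default port 8080 (the prompt text is just an illustrative placeholder):

```shell
# Query the OpenAI-compatible chat completions endpoint of a running
# llama-server instance (started with the command above).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a haiku about quantization."}
    ]
  }'
```

The response comes back as standard OpenAI-style JSON, so existing OpenAI client libraries can be pointed at `http://localhost:8080/v1` as the base URL.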