SaisExperiments/ToastyPigeon_gemma-3-27b-storyteller-GGUF

Quantization Overview

GGUF files ahead! This repository holds GGUF quantized versions of ToastyPigeon/gemma-3-27b-experiment-storyteller.

Quantization executed by SaisExperiments.
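
If you want to grab one of the quantized files programmatically, the sketch below uses huggingface_hub to list the GGUF files in this repo and download one. The repo id is taken from this page's title and the filename is a hypothetical placeholder, so check the printed file list for the real names.

# Minimal sketch: fetch a quantized GGUF file from this repo.
# The filename below is hypothetical -- use a name printed by list_repo_files().
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "SaisExperiments/ToastyPigeon_gemma-3-27b-storyteller-GGUF"

# See which quantization levels are actually available.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Download one of them (replace with a real filename from the list above).
model_path = hf_hub_download(repo_id=repo_id, filename="example-Q4_K_M.gguf")
print("Saved to:", model_path)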

Base Model Blueprint

Need the full schematics? Check the original model card. Fair warning: it contains no details at all x3.

Input Protocol (Instruct Format)

Prompts use the standard Gemma 2/3 instruct format. Structure matters here, so stick to these tokens exactly. An optional system role may be recognized, depending on the base fine-tune.

<start_of_turn>system
{optional system prompt here}<end_of_turn>
<start_of_turn>user
{User messages. You can also place the system prompt here.}<end_of_turn>
<start_of_turn>model
{Model's response}<end_of_turn>

Follow the format precisely; deviations will degrade output quality.
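
For a concrete illustration, here is a minimal sketch that assembles the prompt above and runs it with llama-cpp-python; the model path, sampling settings, and the use of the system role are assumptions, not values from this card.

from llama_cpp import Llama

# Load whichever quantization you downloaded (placeholder path).
llm = Llama(model_path="./gemma-3-27b-storyteller-Q4_K_M.gguf", n_ctx=4096)

# Build the prompt exactly as the format above specifies.
# The system turn is optional and may or may not be recognized.
system = "You are a storyteller."
user = "Tell me a short story about a lighthouse keeper."
prompt = (
    "<start_of_turn>system\n" + system + "<end_of_turn>\n"
    "<start_of_turn>user\n" + user + "<end_of_turn>\n"
    "<start_of_turn>model\n"
)

# Stop on the end-of-turn token so generation ends with the model's reply.
out = llm(prompt, max_tokens=512, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])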

Model size: 27B params
Architecture: gemma3

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

