(A)lgorithmic (P)attern (E)mulation - Fiction! (Gemma 3 4b instruct)

This model was finetuned on a merge of three datasets.

The merged dataset contains about 6,900 entries. Each entry is either a chapter from a novel or novella or, if short enough, an entire short story.

I also produced a Q8_0 GGUF quantization: https://huggingface.co/leftyfeep/ape-fiction-gemma-3-4b-Q8_0-GGUF
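
For local inference, the GGUF can be run with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the exact .gguf filename inside the repo and the sampling settings are assumptions, so check the repository's file listing before running.

```python
# Minimal sketch: run the Q8_0 GGUF locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="leftyfeep/ape-fiction-gemma-3-4b-Q8_0-GGUF",
    filename="ape-fiction-gemma-3-4b-q8_0.gguf",  # assumed filename; check the repo's file list
)

llm = Llama(model_path=model_path, n_ctx=4096)

# Chat-style request: ask the model to continue a piece of fiction.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write the opening paragraph of a gothic short story set in a lighthouse."},
    ],
    max_tokens=512,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```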

Uploaded finetuned model

  • Developed by: leftyfeep
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
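
The exact training recipe is not included in this card. The sketch below only shows the general shape of an Unsloth + TRL SFT run on a text dataset; the dataset file, LoRA settings, and all hyperparameters are illustrative placeholders, not the values used for this model.

```python
# Illustrative sketch of an Unsloth + TRL SFT run (not the exact recipe used for this model).
from unsloth import FastModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the 4-bit base model that this finetune started from.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
    max_seq_length=8192,   # assumed; long enough for chapter-length entries
    load_in_4bit=True,
)

# Attach LoRA adapters to the language layers; ranks here are placeholders.
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)

# Hypothetical merged dataset with one "text" column per chapter/short story.
dataset = load_dataset("json", data_files="merged_fiction.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```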

  • Model size: 4.3B params (Safetensors)
  • Tensor type: BF16
