# (A)lgorithmic (P)attern (E)mulation - Fiction! (Gemma 3 4b instruct)
This model was finetuned on a merge of the following three datasets:
- https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
- https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo
- https://huggingface.co/datasets/leftyfeep/fiction-chapters-24kmax
The merged dataset has about 6900 entries. Each entry is either a chapter from a novel or novella or, if short enough, a complete short story.
I also produced a Q8_0 GGUF quantization: https://huggingface.co/leftyfeep/ape-fiction-gemma-3-4b-Q8_0-GGUF
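The merge itself can be sketched as concatenating the three sources and dropping duplicate passages. This is a minimal illustration, not the actual preprocessing used for this model; the `text` field name and the dedup-by-exact-text step are assumptions.

```python
# Hedged sketch of merging several fiction datasets into one training set.
# Field name "text" and exact-match dedup are assumptions, not from the card.
def merge_datasets(*sources):
    seen = set()
    merged = []
    for src in sources:
        for entry in src:
            key = entry["text"]
            if key not in seen:  # skip passages repeated across sources
                seen.add(key)
                merged.append(entry)
    return merged

# Tiny stand-ins for the three Gutenberg/fiction datasets
ds1 = [{"text": "Chapter one of some novel..."}]
ds2 = [{"text": "Chapter two of another novel..."}]
ds3 = [{"text": "Chapter one of some novel..."}]  # duplicate across sources

print(len(merge_datasets(ds1, ds2, ds3)))  # the duplicate is dropped
```

With the real Hugging Face datasets one would load each repo with `datasets.load_dataset` and combine them with `datasets.concatenate_datasets` before any dedup pass.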
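The Q8_0 GGUF can be run locally with llama.cpp. A usage sketch, assuming `huggingface_hub` and llama.cpp are installed; the exact `.gguf` filename inside the repo is an assumption:

```shell
# Download the Q8_0 GGUF from the Hub (filename is an assumption)
huggingface-cli download leftyfeep/ape-fiction-gemma-3-4b-Q8_0-GGUF \
  ape-fiction-gemma-3-4b-q8_0.gguf --local-dir .

# Generate with llama.cpp's CLI
llama-cli -m ape-fiction-gemma-3-4b-q8_0.gguf \
  -p "Write the opening paragraph of a gothic short story." -n 256
```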
## Uploaded finetuned model
- Developed by: leftyfeep
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## Model tree for leftyfeep/ape-fiction-gemma-3-4b
- Base model: google/gemma-3-4b-pt
- Finetuned: google/gemma-3-4b-it
- Quantized: unsloth/gemma-3-4b-it-unsloth-bnb-4bit