# THE PRIMÉTOILE ENGINE
Visual Novel generation under starlight
| Version | Type | Strengths | Weaknesses | Recommended Use |
|---|---|---|---|---|
| Secunda-0.1-GGUF / RAW | Instruction | Most precise; coherent code; perfected Modelfile | Smaller context; limited flexibility | Production / Baseline |
| Secunda-0.3-F16-QA | QA-based input | Acceptable for question-based generation | Less accurate than 0.1; not as coherent | Prototyping (QA mode) |
| Secunda-0.3-F16-TEXT | Text-to-text | Flexible for freeform tasks | Slightly off; Modelfile-dependent | Experimental / Text rewrite |
| Secunda-0.3-GGUF | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| Secunda-0.5-RAW | QA Natural | Best QA understanding; long-form generation potential | Inconsistent output length; some instability | Research / Testing LoRA |
| Secunda-0.5-GGUF | GGUF build | Portable, inference-ready version of 0.5 | Shares issues of 0.5 | Offline experimentation |
| Secunda-0.1-RAW | Instruction | Same base as 0.1-GGUF | Same as 0.1 | Production backup |
## Secunda 0.5 RAW
For more concise and controlled results, I recommend using Secunda-0.1-GGUF (instruct) or Secunda-0.3-GGUF (natural).
Secunda-0.5-RAW is the third iteration of the Secunda visual novel generation series, built to transform natural-language story prompts into fully structured, high-quality Ren'Py scripts without relying on explicit instructions or formatting. It was fine-tuned on a carefully curated dataset of fictional `.rpy` scenes, each paired with its original narrative concept.
This model is much heavier than its siblings: Secunda-0.1-RAW, Secunda-0.3-F16-QA and Secunda-0.3-F16-TEXT.
Secunda is now based on both direct QA and structured prompting.
## Model Highlights
- Instruction-Free Generation: Unlike Secunda-0.1, this version relies only on a natural prompt such as "A detective wakes up in a town where no one remembers him."
- Finetuned with QLoRA (FP16 runtime): Efficient low-resource finetuning of LLaMA 3.1 8B using 4-bit QLoRA adapters with FP16 compute.
- Script-Style Outputs: Generates structured `.rpy` files, often including characters, backgrounds, dialogue, and a closing `return`.
- More Creative Freedom: Produces more diverse narratives and styles, occasionally with multi-scene outputs.
/!\ NO HUMAN-MADE DATA WAS USED TO TRAIN THIS AI! Secunda takes much pride in making sure the training data is scripted! /!\
If you like Visual Novels, please visit itch.io and support independent creators!
## Training Details
- Base model: `meta-llama/Meta-Llama-3.1-8B`
- Finetuning: QLoRA 4-bit adapters (FP16 runtime)
- Data: 800+ handcrafted prompt/script pairs in `.jsonl`, each with a raw natural-language idea and a full `.rpy` script.
- Hardware: NVIDIA RTX 4070
- Training time: ~12 hours, 20 epochs
## Prompt Style
This model works without instructions. Just write your idea naturally:
"An abandoned cafΓ© reopens every full moon to serve ghosts their last coffee."
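As an illustration, a prompt like this could be fed to a local GGUF build; the model path, context size, and sampling settings below are assumptions for the sketch, not shipped defaults, and running it requires the `llama-cpp-python` package:

```python
def validate_prompt(prompt: str, min_words: int = 10, max_words: int = 30) -> bool:
    # Checks the 10-30 word range recommended for best coherence.
    return min_words <= len(prompt.split()) <= max_words

def generate_scene(prompt: str, model_path: str = "./secunda-0.5.gguf") -> str:
    # model_path is a hypothetical local path to a Secunda GGUF build.
    from llama_cpp import Llama  # imported lazily; needs llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm(prompt, max_tokens=1024, temperature=0.8)
    return out["choices"][0]["text"]
```

Note that `generate_scene` passes the raw idea straight through, with no instruction wrapper, matching the instruction-free prompt style described above.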
## Known Issues
- May occasionally produce multiple unrelated scripts in one output if the token limit is set too high.
- Less deterministic than the instruction-based versions; retries may help.
- Not always guaranteed to end with `return` (you may add it manually).
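Since a trailing `return` is not guaranteed, a small postprocessing helper can append one. This is a sketch of my own, not part of the released tooling, and it assumes a four-space-indented label body:

```python
def ensure_return(script: str) -> str:
    # Append an indented `return` if the script does not already end with one.
    lines = script.rstrip().splitlines()
    if not lines or lines[-1].strip() != "return":
        lines.append("    return")  # assumes a four-space-indented label body
    return "\n".join(lines) + "\n"
```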
## Tips
- Keep prompts between 10 and 30 words for best coherence.
- If the output includes multiple scenes, you can split them manually.
- If needed, combine with the `sanitize_output()` logic from Secunda-0.1 to postprocess outputs.
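The manual split suggested above can be sketched as follows. The heuristic is my own assumption (each generated script is taken to open with its own `label start:` block); it is not the Secunda-0.1 `sanitize_output()` itself:

```python
import re

def split_scripts(output: str) -> list[str]:
    # Find every top-level `label start:`; keep any preamble (e.g. character
    # defines) attached to the first script.
    starts = [m.start() for m in re.finditer(r"(?m)^label start:", output)]
    if len(starts) <= 1:
        return [output.strip()]
    boundaries = [0] + starts[1:] + [len(output)]
    return [output[a:b].strip() for a, b in zip(boundaries, boundaries[1:])]
```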
## Privacy
This model is private and intended for research and internal tooling for the Primétoile visual novel engine.
## License
Apache 2.0. For research, testing, and development only.
## Credits
Trained and maintained by Yaroster for the Secunda engine inside the Primétoile framework.