---
license: apache-2.0
datasets:
- Mielikki/Erebus-87k
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- NewEden/Gryphe-Sonnet-3.5-35k-Subset
- Nitral-AI/GU_Instruct-ShareGPT
- Nitral-AI/Medical_Instruct-ShareGPT
- AquaV/Resistance-Sharegpt
- AquaV/US-Army-Survival-Sharegpt
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- ResplendentAI/bluemoon
- hardlyworking/openerotica-freedomrp-sharegpt-system
- MinervaAI/Aesir-Preview
- anthracite-core/c2_logs_32k_v1.1
- Nitral-AI/Creative_Writing-ShareGPT
- PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
- NewEden/Opus-accepted-hermes-rejected-shuffled
language:
- en
base_model:
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
---
Golden-Curry-12B is a 12B-parameter roleplaying language model built on the Mistral NeMo base. Designed for immersive, character-driven interaction, it excels at staying in persona, sustaining dynamic storytelling, and producing emotionally engaging dialogue. It is well suited to chat-based roleplay, interactive fiction, and character simulation.
Although based on Mistral NeMo, this model is ChatML compatible through and through: the tokenizer was modified to accept the ChatML format before the custom pretraining stage, and every subsequent training step has reinforced it.
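Because the chat template is carried by the tokenizer, the usual `transformers` workflow should apply. Here is a minimal sketch; the repository id is a placeholder, and the rendered special tokens are whatever the shipped template defines:

```python
from transformers import AutoTokenizer

# Placeholder repo id for illustration; substitute the actual model path.
tokenizer = AutoTokenizer.from_pretrained("your-org/Golden-Curry-12B")

messages = [
    {"role": "system", "content": "You are Maya, a sardonic ship's engineer."},
    {"role": "user", "content": "Give me a status report on the reactor."},
]

# The tokenizer's chat template renders the conversation in ChatML, i.e.
# <|im_start|>role ... <|im_end|> blocks, ending with an open assistant turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```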
This model began as a ChatML-modified NeMo base, which saw a custom pretraining stage on a large amount of narrative fiction. The pretrained model was then instruct tuned before receiving a final roleplaying tune in a separate step. Once the supervised fine-tuning was complete, Kahneman-Tversky Optimization (KTO) was applied as a final alignment step.
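For reference, KTO works on unpaired preference data: individual completions labeled desirable or undesirable, rather than chosen/rejected pairs. The sketch below shows what such a step could look like with `trl`'s `KTOTrainer`; the checkpoint paths, example rows, and hyperparameters are illustrative assumptions, not the actual training configuration:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# Placeholder path: the roleplay-tuned SFT checkpoint from the previous step.
model = AutoModelForCausalLM.from_pretrained("path/to/sft-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-checkpoint")

# KTO consumes unpaired preference rows: a prompt, one completion, and a
# boolean label marking that completion as desirable (True) or not (False).
train_dataset = Dataset.from_list([
    {"prompt": "Stay in character as a medieval blacksmith.",
     "completion": "Aye, the forge has been hot since dawn...", "label": True},
    {"prompt": "Stay in character as a medieval blacksmith.",
     "completion": "As an AI language model, I cannot...", "label": False},
])

# beta controls how far the policy may drift from the reference model;
# 0.1 is the library default, used here purely for illustration.
args = KTOConfig(output_dir="golden-curry-kto", beta=0.1)
trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # recent trl versions use processing_class
)
trainer.train()
```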
The model was trained on a diverse collection of instruction and roleplaying data, including the following sets:

- Mielikki/Erebus-87k
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- NewEden/Gryphe-Sonnet-3.5-35k-Subset
- Nitral-AI/GU_Instruct-ShareGPT
- Nitral-AI/Medical_Instruct-ShareGPT
- AquaV/Resistance-Sharegpt
- AquaV/US-Army-Survival-Sharegpt
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- ResplendentAI/bluemoon
- hardlyworking/openerotica-freedomrp-sharegpt-system
- MinervaAI/Aesir-Preview
- anthracite-core/c2_logs_32k_v1.1
- Nitral-AI/Creative_Writing-ShareGPT
- PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
- NewEden/Opus-accepted-hermes-rejected-shuffled