---
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-3.3
- finetune
- roleplay
- chat
- wings-of-fire
- dungeon-master
---
Send me your support to help me feed the data beast! Also taking commissions for universe-specific models.
Support on Ko-fi →

For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.
Download Files →

The GGUF quantized model files are available for download. Click the button below to view the files.
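Once downloaded, a GGUF file can be run locally with llama.cpp's `llama-cli`. A minimal sketch — the filename and sampling settings below are placeholders, not recommendations from this card; substitute the quant you actually downloaded:

```shell
# Hypothetical filename -- replace with the GGUF quant you downloaded.
# -cnv starts an interactive chat session using the model's chat template;
# -c sets the context window and --temp the sampling temperature.
llama-cli \
  -m Llama-3.3-70B-Instruct-WoF-v6.1-Q4_K_M.gguf \
  -c 8192 \
  --temp 0.8 \
  -cnv \
  -p "You are a narrator in the Wings of Fire universe."
```

Pair this with the character card and lore book above for the intended experience.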
Download GGUF Files →

This is Version 6.1 (Experimental) of the fine-tuned meta-llama/Llama-3.3-70B-Instruct, specialized for roleplaying within the Wings of Fire universe. V6.1 is a significant evolution, trained on a larger, more focused dataset built entirely from canon lore and "what-if" scenarios from the book series.
The goal of this model is to provide the most lore-accurate and immersive conversational experience to date. It can adopt canon character personas with high fidelity, explore alternate timelines from the books, and guide the narrative with new interactive elements.
A surprising outcome of this highly specialized training is that users have reported V6.1 is also very capable of general, non-WOF roleplay, making it a more versatile creative partner than previous versions.
This model was trained for a total of 2.3 epochs (experimental) on a single NVIDIA RTX PRO 6000 Blackwell, generously provided by @Quͫaͦcͦk. The original V6 was trained for 2 epochs; this version adds an extra 0.3 epochs on top of that base.
A QLoRA (Quantized Low-Rank Adaptation) approach was used for efficient fine-tuning, with an optimized process configured using Axolotl.
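An Axolotl QLoRA run is driven by a YAML config. A minimal sketch of what such a config could look like — every value here is an illustrative assumption, not the actual training recipe used for V6.1:

```yaml
base_model: meta-llama/Llama-3.3-70B-Instruct
load_in_4bit: true        # QLoRA: frozen base weights quantized to 4-bit
adapter: qlora
lora_r: 32                # illustrative rank, not the real setting
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true  # attach adapters to all linear layers
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 2.3           # matches the reported 2 + 0.3 epoch schedule
learning_rate: 0.0002
bf16: true
datasets:
  - path: ./wof_dataset.jsonl   # hypothetical dataset path
    type: chat_template
```

The 4-bit base plus low-rank adapters is what lets a 70B model fit a single-GPU fine-tune like the one described above.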
V6.1 was fine-tuned on a completely new dataset of 3,200 high-quality examples with several key improvements:
- Interactive, choice-driven prompts, for example: *"You arrive in front of Queen Scarlet. What do you do? A)... B)... C)..."*
- Improved **scene transitions**, resulting in a cleaner and more natural narrative style. The model should now produce cleaner prose.