# L3.3-70B-Animus-V6-Experimental

Send me your support to help me feed the data beast! Also taking commissions for universe-specific models.

Support on Ko-fi

## Character Card & Lore Book
For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.
Download Files →

## GGUF Quantized Models
The GGUF quantized model files are available for download. Click the button below to view the files.
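Once downloaded, a GGUF quant can be run locally with llama.cpp's `llama-cli`. This is a sketch only; the filename and quant level (Q4_K_M) are assumptions, so substitute the file you actually downloaded:

```shell
# Interactive chat with a downloaded GGUF quant via llama.cpp.
# Filename and quant level below are illustrative, not the actual file names.
./llama-cli -m L3.3-70B-Animus-V6-Experimental.Q4_K_M.gguf -c 8192 -cnv
```

`-c` sets the context window and `-cnv` starts conversation (chat) mode; pair this with the character card and lore book above for best results.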
Download GGUF Files →

## Model Description
This is Version 6 (Experimental) of the fine-tuned meta-llama/Llama-3.3-70B-Instruct, specialized for roleplaying within the Wings of Fire universe. V6 is a significant evolution, trained on a larger, more focused dataset built entirely from canon lore and "what-if" scenarios from the book series.
The goal of this model is to provide the most lore-accurate and immersive conversational experience to date. It can adopt canon character personas with high fidelity, explore alternate timelines from the books, and guide the narrative with new interactive elements.
A surprising outcome of this highly specialized training is that users have reported V6 is also very capable of general, non-WOF roleplay, making it a more versatile creative partner than previous versions.
## Training Details

### Training Hardware
This model was trained for 2 epochs on a single NVIDIA RTX PRO 6000 Blackwell, generously provided by @Quͫaͦcͦk.
### Training Procedure
A QLoRA (Quantized Low-Rank Adaptation) approach was used for efficient fine-tuning, with an optimized process configured using Axolotl.
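A minimal sketch of what such an Axolotl QLoRA config might look like. All hyperparameter values and the dataset path below are illustrative assumptions, not the actual training configuration (only the base model and the 2-epoch count come from this card):

```yaml
# Illustrative Axolotl QLoRA config -- values are assumptions, not the real run
base_model: meta-llama/Llama-3.3-70B-Instruct
load_in_4bit: true          # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 32                  # assumed rank
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true    # apply LoRA to all linear layers
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 2               # matches the 2 epochs stated above
optimizer: paged_adamw_8bit
learning_rate: 0.0002
datasets:
  - path: data/wof_roleplay.jsonl   # hypothetical dataset file
    type: chat_template
bf16: true
flash_attention: true
```

The key QLoRA-specific lines are `load_in_4bit` plus `adapter: qlora`, which keep the frozen 70B base in 4-bit while training small low-rank adapters.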
### Training Data
V6 was fine-tuned on a completely new dataset of 3,200 high-quality examples with several key improvements:
- Canon-Centric Scenarios: All roleplay scenarios are now based on pivotal events from the Wings of Fire book series, exploring "what-if" outcomes (e.g., what if Darkstalker didn't kill Arctic at that moment?). This ensures deep and lore-consistent interactions.
- Canon-Only Characters: The model was trained exclusively on canon characters from the books. AI-generated characters have been removed from the training data (except for the user's persona), leading to more authentic character portrayals.
- Dungeon Master (DM) Style Questions: A new feature has been integrated where the model can act as a Dungeon Master, prompting the user with multiple-choice actions to drive the story forward. For example:
You arrive in front of Queen Scarlet. What do you do? A)... B)... C)...
- Improved Data Cleaning: The dataset underwent a rigorous cleaning process to remove formatting artifacts from previous versions, such as **scene transitions**, resulting in a cleaner and more natural narrative style.
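The DM-style prompts above follow a simple lettered-options format, so a front-end could extract the choices with a small helper. This is a hypothetical sketch (not part of the model or its tooling), assuming replies use the `A) ... B) ... C) ...` shape shown above:

```python
import re

def parse_choices(reply: str) -> dict[str, str]:
    """Extract lettered multiple-choice options (A) ... B) ...) from a DM-style reply."""
    # Lazily capture each option's text up to the next lettered option or end of string.
    pattern = re.compile(r"([A-D])\)\s*(.+?)(?=\s+[A-D]\)|$)", re.DOTALL)
    return {letter: text.strip() for letter, text in pattern.findall(reply)}

reply = ("You arrive in front of Queen Scarlet. What do you do? "
         "A) Bow and wait B) Challenge her to the arena C) Flee the throne room")
choices = parse_choices(reply)
# choices maps "A", "B", "C" to their option text
```

A front-end could then present `choices` as buttons and send the selected letter back as the user's turn.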
## Intended Use & Limitations
- Intended Use: The primary purpose of this model is creative writing and roleplaying within the Wings of Fire universe. However, user feedback indicates it is also highly effective for general-purpose roleplaying.
- Limitations & Quirks:
- Performance on tasks outside of its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.
- Versatility: Although it is tuned specifically for Wings of Fire, users have reported it is very capable of normal roleplay with other settings and characters.
- The model may "hallucinate" or generate plausible but non-canonical information, especially when pushed outside the established "what-if" scenarios.
- Content: The training data includes mature and darker themes from the Wings of Fire series, such as conflict, character death, and moral ambiguity. The model is capable of generating content reflecting these themes. As always, it is up to the user what they do with it.
- Formatting: Training data was cleaned to remove narrative artifacts like **scene transitions**. The model should now produce cleaner prose.
- Safety: This model has not undergone additional safety alignment beyond what was included in its base Llama 3.3 model. Standard responsible AI practices should be followed.
## Recommended Sampler Settings
For optimal performance that balances creativity and coherence, the following default sampler settings are recommended.
## Acknowledgements
- Credit to Meta for the powerful Llama 3.3 architecture.
- A special thank you to @Quͫaͦcͦk for providing the NVIDIA RTX PRO 6000 Blackwell GPU that made this training possible.
- Credit to Google for the Gemini Pro model, used in dataset generation.
- Credit to Evan Armstrong for Augmentoolkit, an invaluable tool for dataset creation.