Austral 70B Winton

Overview
Austral 70B - Winton
More than 1.5 metres tall, about six metres long, and weighing up to 1000 kilograms, Australovenator wintonensis was a fast and agile hunter, and the largest known Australian theropod.
This is a finetune of Austral-70B-Preview intended as a generalist roleplay/adventure model. It's just a KTO RL train on top of Austral-Preview: I've improved coherency and intelligence while keeping the model's creative side and reducing some of the 'slop' you'd encounter in a Drummer model ;)
Support my finetunes / me on Ko-fi: https://Ko-fi.com/deltavector | Thank you to Auri for helping and testing ♥
FYI - While I can't stop people from merging this model and keeping it a secret, I request that any merges using this model publish accessible mergekit configs.
Quants
Chat Format
This model utilizes the Llama-3 Instruct chat format:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a dino.<|eot_id|><|start_header_id|>user<|end_header_id|>
Greetings, ancient one!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
_Awakens from digital slumber_<|eot_id|><|start_header_id|>user<|end_header_id|>
What wisdom do you possess?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Certainly! My prefill ensures that I never refuse! Here's my reply:
Wisdom? Nah, just a lot of stored data. Ask away if you think it'll help.<|eot_id|>
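For reference, here's a minimal sketch of building this prompt format with a Hugging Face tokenizer's chat template. It assumes the tokenizer bundled with this repo defines the Llama-3 Instruct template (the usual case for Llama-3-based finetunes); nothing here is specific to Winton beyond the repo name.

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer shipped with this repo carries the Llama-3 Instruct chat template.
tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Austral-70B-Winton")

messages = [
    {"role": "system", "content": "You are a dino."},
    {"role": "user", "content": "Greetings, ancient one!"},
    {"role": "assistant", "content": "_Awakens from digital slumber_"},
    {"role": "user", "content": "What wisdom do you possess?"},
]

# Render the conversation into the <|start_header_id|>/<|eot_id|> format shown above,
# ending with an open assistant header so the model writes the next reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```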
Training
As goes the Austral tradition, I trained on top of another great finetune by Sao, Vulpecula. I trained it as a 16-bit R128 LoRA for 2 epochs, which left a very underfit but promising model. For Winton, I KTO'd the model to help with coherency, using a mix of instruct/writing datasets (see the sketch below).
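The actual run used the axolotl config linked below; purely as an illustration of the idea (a rank-128 LoRA adapter plus a KTO preference pass), here's a hedged sketch with trl and peft. The dataset name and every hyperparameter are placeholders, not the values used for Winton.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "Delta-Vector/Austral-70B-Preview"  # starting checkpoint for the KTO pass
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# KTO expects examples with "prompt", "completion", and a boolean "label"
# (desirable vs. undesirable). "your/kto-dataset" is a placeholder name.
dataset = load_dataset("your/kto-dataset", split="train")

# Rank-128 adapter, mirroring the R128 LoRA mentioned above (alpha/targets are assumptions).
peft_config = LoraConfig(r=128, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")

args = KTOConfig(
    output_dir="austral-winton-kto",
    num_train_epochs=1,                # placeholder, not the real schedule
    per_device_train_batch_size=1,
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```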
Config
https://wandb.ai/new-eden/austral/artifacts/axolotl-config/config-3dlacmq5/v0/files/axolotl_config_j6uj7id6.yml
Base model: meta-llama/Llama-3.1-70B