Austral 70B Winton

Model banner
Trained by Delta-Vector

Overview

Austral 70B - Winton

A Llama-based, KTO-enhanced, 70B-sized adventure/roleplay generalist, finetuned from Vulpecula.

More than 1.5 metres tall, about six metres long, and weighing up to 1,000 kilograms, Australovenator wintonensis was a fast and agile hunter, and the largest known Australian theropod.

This is a finetune of Austral-70B-Preview intended as a generalist roleplay/adventure model. It is simply a KTO RL train on top of Austral-Preview: it improves coherency and intelligence while keeping the model's creative side, and reduces some of the 'slop' you'd encounter in a Drummer model ;)

Support my finetunes / me on Ko-fi: https://Ko-fi.com/deltavector | Thank you to Auri for helping and testing ♥

FYI: while I can't stop people from merging this model and keeping it a secret, I ask that any merges built on this model publish accessible mergekit configs.

Quants

  • GGUF: https://huggingface.co/bartowski/Delta-Vector_Austral-70B-Winton-GGUF (for use with llama.cpp & forks; Ty Bart & Auri!)
  • EXL3: https://huggingface.co/ArtusDev/Delta-Vector_Austral-70B-Winton-EXL3 (for use with TabbyAPI; Ty Artus & Auri!)
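
For a quick local smoke test, here is a minimal sketch using llama-cpp-python with one of the GGUF quants; the filename is hypothetical, so substitute whichever quant size you actually downloaded:

# Minimal sketch: local inference with a GGUF quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Austral-70B-Winton-Q4_K_M.gguf",  # hypothetical filename; use your download
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if they fit
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a dino."},
        {"role": "user", "content": "Greetings, ancient one!"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])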

Chat Format

This model uses the Llama-3 Instruct chat format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a dino.<|eot_id|><|start_header_id|>user<|end_header_id|>
Greetings, ancient one!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
_Awakens from digital slumber_<|eot_id|><|start_header_id|>user<|end_header_id|>
What wisdom do you possess?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Certainly! My prefill ensures that I never refuse! Here's my reply:
Wisdom? Nah, just a lot of stored data. Ask away if you think it'll help.<|eot_id|>
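
A minimal sketch of producing that scaffolding with the tokenizer's built-in chat template, assuming the repo ships the standard Llama-3 Instruct template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Austral-70B-Winton")

messages = [
    {"role": "system", "content": "You are a dino."},
    {"role": "user", "content": "Greetings, ancient one!"},
]

# Renders the <|start_header_id|>/<|eot_id|> markup shown above and appends
# the assistant header so generation begins at the model's turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)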

Training

As goes the Austral tradition, I trained on top of another great finetune by Sao, Vulpecula. I trained it as a 16-bit, rank-128 LoRA for 2 epochs, which left a very underfit but promising model. For Winton, I ran a KTO pass on the model to help with coherency, using a mix of instruct/writing datasets.
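
For readers who want to reproduce the shape of that SFT stage, here is a hedged sketch of a 16-bit, rank-128 LoRA setup with peft; the repo id, alpha, and target modules are assumptions, and the real hyperparameters live in the axolotl config linked below:

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# 16-bit base model, matching the bf16 train described above.
# The repo id is a hypothetical placeholder for Sao's Vulpecula finetune.
model = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Vulpecula",
    torch_dtype=torch.bfloat16,
)

lora = LoraConfig(
    r=128,           # rank 128, as stated
    lora_alpha=16,   # assumption; check the linked axolotl config
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()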

Config
https://wandb.ai/new-eden/austral/artifacts/axolotl-config/config-3dlacmq5/v0/files/axolotl_config_j6uj7id6.yml

The base SFT ran for 2 epochs on 8x A100s; I then ran KTO for 1 epoch to clean up some coherency issues. Training took roughly 48 hours in total.
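
And a minimal sketch of that KTO cleanup pass using TRL's KTOTrainer; the dataset id and hyperparameters are placeholders, not the actual recipe:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

sft_id = "Delta-Vector/Austral-70B-Preview"  # KTO is applied on top of the SFT model
model = AutoModelForCausalLM.from_pretrained(sft_id)
tokenizer = AutoTokenizer.from_pretrained(sft_id)

# KTO takes unpaired feedback: a prompt, a completion, and a boolean label
# marking the completion as desirable or undesirable.
dataset = load_dataset("my-org/instruct-writing-kto-mix", split="train")  # placeholder id

args = KTOConfig(
    output_dir="austral-winton-kto",
    num_train_epochs=1,  # one KTO epoch, as described above
    beta=0.1,            # assumption; TRL's default KL penalty strength
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()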

Credits

TYSM to my friends: Auri, Zerofata, Lucy, Trappu, Alicat, Kubernetes Bad, Intervitens, NyxKrage & Kalomaze
