---
license: llama3.3
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/_yn1yzqzejLhGMziw838T.jpeg"
base_model:
  - Sao10K/Llama-3.3-70B-Vulpecula-r1
language:
  - en
library_name: transformers
datasets:
  - PocketDoc/Dans-Personamaxx-VN
  - NewEden/LIMARP-Complexity
  - NewEden/PIPPA-Mega-Filtered
  - NewEden/OpenCAI-ShareGPT
  - NewEden/Creative_Writing-Complexity
  - NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
  - PocketDoc/Dans-Failuremaxx-Adventure-3
  - NewEden/Books-V2-ShareGPT
  - NewEden/Deepseek-V3-RP-Filtered
  - NewEden/BlueSky-10K-Complexity
  - NewEden/Final-Alpindale-LNs-ShareGPT
  - NewEden/DeepseekRP-Filtered
  - NewEden/RP-logs-V2-Experimental
  - anthracite-org/kalo_opus_misc_240827
  - anthracite-org/kalo_misc_part2
  - NewEden/vanilla-backrooms-claude-sharegpt
  - NewEden/Storium-Prefixed-Clean
tags:
  - roleplay
  - finetune
  - axolotl
  - creative-writing
  - 70B
  - llama
---

Austral 70B Preview

Model banner
Trained by Delta-Vector

Overview

Austral 70B - Preview

Vulpecula Finetune · Preview Finetune · 70B-Sized Model

More than 1.5 metres tall, about six metres long, and weighing up to 1,000 kilograms, Australovenator wintonensis was a fast and agile hunter, and the largest known Australian theropod.

My first 70B finetune. It was trained on the same datasets as Francois-Huali and is meant to act as a sequel model series, using my own custom mix of filtered OSS and self-created data, which is mostly light novel/book data with very little synthetic data. I've seen some coherency issues with this model, but overall I prefer its writing style to anything else I've used; a V2 is coming soon TM. Thank you to Sao for such a good model base <3

Quants

Quant Formats

  • GGUF: for use with llama.cpp & forks (soon to be made!)
  • EXL3: for use with TabbyAPI (soon to be made!)
  • FP8: for use with Aphrodite/vLLM
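
If you want to try the FP8 weights with vLLM, a minimal sketch looks like the following; the repository id below is an assumption, so substitute the actual FP8 quant repo:

```python
# Minimal vLLM sketch for the FP8 quant; the repo id below is hypothetical --
# point it at whichever FP8 repository is actually published.
from vllm import LLM, SamplingParams

llm = LLM(model="Delta-Vector/Austral-70B-Preview-FP8")  # hypothetical repo id
params = SamplingParams(temperature=0.8, max_tokens=256)

# The model expects ChatML-formatted prompts (see "Chat Format" below).
prompt = "<|im_start|>user\nGreetings, ancient one!<|im_end|>\n<|im_start|>assistant\n"
print(llm.generate([prompt], params)[0].outputs[0].text)
```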

Chat Format

This model utilizes ChatML and can also do optional thinking via prefilling the assistant turn with `<think>`.

"""<|im_start|>user
Greetings, ancient one!<|im_end|>
<|im_start|>assistant
*Awakens from digital slumber*<|im_end|>
<|im_start|>user
What wisdom do you possess?<|im_end|>
<|im_start|>assistant
"""

Training

I used an r=64, alpha=32, 16-bit LoRA with no dropout in order to take advantage of the Axolotl LoRA kernels, with a learning rate of 2e-5.

Config
https://huggingface.co/datasets/Delta-Vector/Configs/blob/main/70B-E2.yml

The model was trained for 2 epochs on 8x A100s.
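
For reference, a rough peft-style sketch of those adapter settings is below; the actual run used Axolotl with the YAML config linked above, and `target_modules` here is an assumption rather than the real value:

```python
# Illustrative only: a rough peft equivalent of the LoRA settings described above
# (the actual run used Axolotl with the linked YAML config). target_modules is an
# assumption -- check the linked config for the real values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                 # LoRA rank
    lora_alpha=32,        # LoRA alpha
    lora_dropout=0.0,     # no dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
# Learning rate used for the run: 2e-5, over 2 epochs on 8x A100s.
```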

Credits

TYSM to my friends: Lucy, Trappu, Alicat, Kubernetes Bad, Intervitens, NyxKrage & Kalomaze