SFT checkpoint for GLM4 (Delta-Vector/Austral-GLM4-SFT); use the -winton downstream model instead.

wandb: https://wandb.ai/new-eden/Austral-32B/runs/a27tkw8n/artifacts
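Below is a minimal inference sketch, not an official usage recipe: it assumes a recent `transformers` release with GLM4 support, that the checkpoint ships a chat template in its tokenizer config, and enough GPU memory for a 32.6B BF16 model. The prompt text and generation settings are illustrative only.

```python
# Minimal inference sketch (assumes recent transformers with GLM4 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delta-Vector/Austral-GLM4-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored as BF16 safetensors
    device_map="auto",           # shard across available GPUs
)

# Example prompt; assumes the tokenizer defines a chat template.
messages = [{"role": "user", "content": "Write a short scene set on a rainy space station."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```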

Datasets used for this SFT run (a loading sketch follows the list):

- Delta-Vector/Orion-LN-V1-ShareGPT
- Delta-Vector/Orion-Personamaxx-RP
- Delta-Vector/Orion-Co-Writer-51K
- Delta-Vector/Orion-Praxis-Co-Writer
- Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT
- Delta-Vector/Orion-PIPPA-Cleaned-V2
- Delta-Vector/Orion-Alpindale-LN-ShareGPT
- Delta-Vector/Orion-Deepseek-V3-RP-Filtered
- Delta-Vector/Orion-Books-V2-ShareGPT
- Delta-Vector/Orion-Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- Delta-Vector/Orion-RP-Guild
- Delta-Vector/Orion-Creative_Writing-Complexity
- Delta-Vector/Orion-Deepseek-R1-RP-Filtered
- Delta-Vector/Orion-Storium-Prefixed-Clean
- Delta-Vector/Orion-Misc-Sharegpt-Prefixed
- Delta-Vector/Orion-LIMARP-Complexity
- Delta-Vector/Orion-BlueSky-10K-Complexity
- Delta-Vector/Orion-OpenCAI-ShareGPT
- Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned
- Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt
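The sketch below shows one way to inspect these datasets with the `datasets` library. It assumes the repos are public and ShareGPT-style (a `conversations` column of `{"from": ..., "value": ...}` turns); the split name and column layout may differ per dataset.

```python
# Sketch for inspecting one of the SFT datasets (column names are assumptions).
from datasets import load_dataset

# Any dataset from the list above; split name may vary per repo.
ds = load_dataset("Delta-Vector/Orion-PIPPA-Cleaned-V2", split="train")
print(ds)  # features and row count

sample = ds[0]
for turn in sample.get("conversations", []):  # ShareGPT-style turns, if present
    print(f'{turn["from"]}: {turn["value"][:120]}')
```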
Model size: 32.6B parameters · Tensor type: BF16 · Format: Safetensors