boltmonkey_shortreasoning-8b

QLoRA adapter merged into the base model weights.
Fine-tuned for short-form chain-of-thought reasoning.

  • Base model: SuperNeuralDreadDevil-8b
  • Dataset: Cosmopedia‑Instruct 60k (ShareGPT style)
  • Context length: 1096 tokens
  • Training: 4 epochs, LoRA r = 32, α = 16, dropout 0.05, fp16, 4-bit quantization (see the sketch below)
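
The hyperparameters above map roughly onto the following peft/bitsandbytes setup. This is a sketch for orientation only, not the training script: anything the card does not state (4-bit quantization type, target modules, optimizer settings) is an assumption, and train_args.json remains the authoritative Axolotl config.

```python
# Approximate QLoRA setup matching the card's hyperparameters.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit quantized base weights (QLoRA)
    bnb_4bit_quant_type="nf4",             # assumption: not stated in the card
    bnb_4bit_compute_dtype=torch.float16,  # fp16 training per the card
)

lora_config = LoraConfig(
    r=32,               # LoRA rank from the card
    lora_alpha=16,      # α = 16
    lora_dropout=0.05,  # dropout 0.05
    task_type="CAUSAL_LM",
    # target_modules is an assumption; the card does not list them
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```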

See train_args.json for the full Axolotl config.
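
For inference, something like the following should work with transformers. The repo id comes from this card; using the tokenizer's chat template is an assumption based on the ShareGPT-style training data.

```python
# Minimal inference sketch; adjust dtype/device settings to your environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "BoltMonkey/boltmonkey_shortreasoning-8b"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly: why is the sky blue?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```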

Weights: BF16 safetensors, 8.03B parameters.
