Built with Axolotl

Meta-Llama-3.1-8B-Adventure-QLoRA

This LoRA was trained on the Llama 3.1 8B base model using completion-format (raw text) data.

The datasets used were:

  • Spring Dragon
  • Skein

This is not an instruct model and no instruct format was used.

The intended use is text completion, with each user action given on its own line prefixed by `> ` (i.e. `> User Input`). This is the default formatting in Kobold Lite's Adventure mode.
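As a hedged illustration of the turn format described above, a prompt for this completion-style model can be assembled like this. The helper name and story text are hypothetical; only the `> ` action prefix comes from the card:

```python
def format_adventure_turn(story_so_far: str, user_input: str) -> str:
    """Append a user action in Kobold Lite Adventure style.

    The action goes on its own line prefixed with '> ', and the model
    is then expected to continue the story as plain text completion.
    """
    return f"{story_so_far}\n\n> {user_input}\n"


prompt = format_adventure_turn(
    "You stand before the ruined gate. Moss covers the fallen stones.",
    "examine the gate",
)
```

The resulting string would be sent to the model as an ordinary completion prompt (no chat template), since no instruct format was used in training.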

If merged into an instruct model, this adapter should impart the flavor of the text-adventure data; in that case, use the instruct model's own prompt format.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine_with_min_lr
  • lr_scheduler_warmup_steps: 20
  • num_epochs: 1
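The total train batch size listed above follows from the per-device batch size and gradient accumulation; a quick arithmetic check (not part of the training script):

```python
# Effective (total) batch size = per-device batch size x accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the value reported above
```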

Framework versions

  • PEFT 0.12.0
  • Transformers 4.45.0.dev0
  • Pytorch 2.3.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1