
Built with Axolotl

# LoneStriker/Yi-34B-Spicyboros-3.1-LoRA

This LoRA was fine-tuned on the Spicyboros-3.1 dataset on top of the Yi-34B-Llama base model.

## Model description

Base model: Yi-34B-Llama
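
A minimal sketch of loading the adapter with PEFT is shown below. The base checkpoint id and the prompt format are assumptions, not confirmed by this card; substitute whichever Yi-34B-Llama repository and prompt template you actually use.

```python
# Sketch: attach this LoRA adapter to a Yi-34B-Llama base model with PEFT.
# The base_id below is an assumption -- the card only names "Yi-34B-Llama".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Yi-34B-Llama"  # placeholder: use your llama-fied Yi-34B checkpoint
adapter_id = "LoneStriker/Yi-34B-Spicyboros-3.1-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Layer the LoRA weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt only; check the Spicyboros-3.1 chat template before use
prompt = "USER: Hello! ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```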

## Intended uses & limitations

See the Yi model license.

## Training and evaluation data

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • total_train_batch_size: 4
  • total_eval_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • num_epochs: 1
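
Training was done with Axolotl, so the settings above came from an Axolotl config rather than the code below. As a rough, hedged reconstruction, here is what the equivalent `transformers.TrainingArguments` would look like; `output_dir` and the `bf16` flag are assumptions.

```python
# Approximate reconstruction of the listed hyperparameters using
# transformers.TrainingArguments. This is a sketch, not the actual
# Axolotl config used for training.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="yi-34b-spicyboros-3.1-lora",  # placeholder path (assumption)
    learning_rate=1e-4,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=1,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                       # assumption: mixed-precision setting not stated
)
# With 2 GPUs (distributed_type: multi-GPU, num_devices: 2), the effective
# total train batch size is 2 * 2 = 4, matching the totals listed above.
```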

### Training results

### Framework versions

  • Transformers 4.34.1
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1