flan-t5-base-skill-extraction-lora-ver1

This model is a LoRA adapter fine-tuned from google/flan-t5-base; the training dataset is not documented. It achieves the following result on the evaluation set:

  • Loss: 1.1084

Model description

More information needed

Intended uses & limitations

More information needed
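
Pending further documentation, the sketch below shows one way to load and run the adapter on top of the base model with PEFT. It is an illustration only: the input prompt template is an assumption, since the card does not document the expected format.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model and attach the LoRA adapter from this repository.
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(
    base_model, "nguyen10001/flan-t5-base-skill-extraction-lora-ver1"
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# ASSUMPTION: the prompt format below is a guess; the card does not specify one.
text = (
    "Extract skills: We need a backend engineer with Python, Docker, "
    "and PostgreSQL experience."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```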

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (8-bit, OptimizerNames.ADAMW_8BIT) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
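
A minimal sketch of this training configuration in transformers + PEFT follows. The hyperparameters mirror the list above; the LoRA rank/alpha/dropout and the dataset are NOT documented in this card and are placeholders.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# ASSUMPTION: LoRA settings are not recorded in the card; these are placeholders.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, r=16, lora_alpha=32, lora_dropout=0.05
)
model = get_peft_model(base_model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-skill-extraction-lora-ver1",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size: 16
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_8bit",              # 8-bit AdamW; betas/epsilon as listed above
    seed=42,
    eval_strategy="steps",           # evaluation every 300 steps, per the results table
    eval_steps=300,
    logging_steps=300,
)

# The train/eval datasets are not documented in this card:
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```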

Training results

Training Loss | Epoch  | Step | Validation Loss
1.9885        | 0.5051 |  300 | 1.5889
1.5829        | 1.0101 |  600 | 1.3672
1.4693        | 1.5152 |  900 | 1.2715
1.4251        | 2.0202 | 1200 | 1.2314
1.3995        | 2.5253 | 1500 | 1.2070
1.3413        | 3.0303 | 1800 | 1.1826
1.3164        | 3.5354 | 2100 | 1.1602
1.2953        | 4.0404 | 2400 | 1.1533
1.3004        | 4.5455 | 2700 | 1.1426
1.2823        | 5.0505 | 3000 | 1.1328
1.2630        | 5.5556 | 3300 | 1.1328
1.2396        | 6.0606 | 3600 | 1.1230
1.2127        | 6.5657 | 3900 | 1.1201
1.1944        | 7.0707 | 4200 | 1.1162
1.2369        | 7.5758 | 4500 | 1.1104
1.2336        | 8.0808 | 4800 | 1.1094
1.1903        | 8.5859 | 5100 | 1.1094
1.2043        | 9.0909 | 5400 | 1.1084
1.2145        | 9.5960 | 5700 | 1.1084

Framework versions

  • PEFT 0.10.0
  • Transformers 4.49.0
  • PyTorch 2.4.1+cu118
  • Datasets 4.1.1
  • Tokenizers 0.21.0
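
For reproducibility, the installed versions can be compared against this list; a small convenience sketch (module names follow each package's import name):

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Print each installed version next to the version listed in this card.
for module, expected in [
    (peft, "0.10.0"),
    (transformers, "4.49.0"),
    (torch, "2.4.1+cu118"),
    (datasets, "4.1.1"),
    (tokenizers, "0.21.0"),
]:
    print(f"{module.__name__}: installed {module.__version__}, card lists {expected}")
```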

Model tree for nguyen10001/flan-t5-base-skill-extraction-lora-ver1

  • Base model: google/flan-t5-base (this repository is a PEFT LoRA adapter)