samsum_42

This model is a fine-tuned version of google/t5-v1_1-large on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4591
  • Rouge1: 50.8056
  • Rouge2: 27.2825
  • RougeL: 42.3835
  • RougeLsum: 47.2015
  • Gen Len (mean generated length, in tokens): 26.3814
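For reference, a minimal inference sketch (not part of the original card) using the transformers summarization pipeline. The repo id jialicheng/samsum_t5-large is taken from this model's page, and the sample dialogue is a placeholder for illustration:

```python
# Minimal usage sketch: load the fine-tuned checkpoint through the
# standard transformers summarization pipeline. The repo id below is
# an assumption based on the model page, not stated in the card body.
from transformers import pipeline

summarizer = pipeline("summarization", model="jialicheng/samsum_t5-large")

# Placeholder samsum-style dialogue for illustration.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```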

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
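As a hedged sketch, the hyperparameters above roughly correspond to the following Seq2SeqTrainingArguments. The output directory and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch results table below), not taken from the card:

```python
# Rough reproduction sketch of the listed hyperparameters; this is not
# the author's training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="samsum_t5-large",        # hypothetical placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above
    # (these match the transformers defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",         # assumption: per-epoch eval
    predict_with_generate=True,          # needed for ROUGE/Gen Len metrics
)
```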

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 921  | 1.7107          | 49.4859 | 25.6708 | 40.6695 | 45.4657   | 29.0733 |
| 3.053         | 2.0   | 1842 | 1.4591          | 50.8056 | 27.2825 | 42.3835 | 47.2015   | 26.3814 |
| 1.8078        | 3.0   | 2763 | 1.4260          | 47.4424 | 24.5737 | 38.5731 | 43.77     | 24.5098 |
| 1.6115        | 4.0   | 3684 | 1.4194          | 47.6099 | 24.4291 | 38.6691 | 43.651    | 24.7213 |
| 1.5189        | 5.0   | 4605 | 1.4182          | 47.9053 | 25.0788 | 39.0674 | 43.9306   | 25.1626 |

Here "No log" means the Trainer recorded no training loss before the first logging step of epoch 1.
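For context, a sketch of how the ROUGE columns above are typically computed with the evaluate library (assumed setup, not the author's evaluation script; the example strings are placeholders):

```python
# Hedged sketch: compute ROUGE the way summarization fine-tunes
# usually report it. Predictions/references below are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum, as F-measures in [0, 1];
# the table reports these values scaled by 100.
print(scores)
```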

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2