
Usage

from transformers import pipeline

# Load the model and tokenizer from the Hugging Face Hub with a pipeline.
# device=0 runs on the first GPU; use device=-1 to run on the CPU.
enhancer = pipeline("summarization", model="gokaygokay/Lamini-Prompt-Enchance-Long", device=0)

prompt = "A blue-tinted bedroom scene, surreal and serene, with a mysterious reflected interior."
prefix = "Enhance the description: "

# Enhance the prompt by prepending the task prefix.
res = enhancer(prefix + prompt)

print(res[0]["summary_text"])
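
If you need more control over generation than the summarization pipeline's defaults allow (for example, longer outputs), the model can also be loaded directly. The snippet below is a minimal sketch; the max_new_tokens and num_beams values are illustrative choices, not settings documented for this model.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gokaygokay/Lamini-Prompt-Enchance-Long")
model = AutoModelForSeq2SeqLM.from_pretrained("gokaygokay/Lamini-Prompt-Enchance-Long")

text = "Enhance the description: A blue-tinted bedroom scene, surreal and serene."
inputs = tokenizer(text, return_tensors="pt")

# Illustrative generation settings, not documented defaults for this model.
outputs = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))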

Lamini-Prompt-Enchance-Long

This model is a fine-tuned version of MBZUAI/LaMini-Flan-T5-248M on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.1624
  • Rouge1: 20.2443
  • Rouge2: 9.3642
  • RougeL: 17.2484
  • RougeLsum: 19.0703
  • Gen Len: 19.0
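
For reference, ROUGE scores like those above can be computed with Hugging Face's evaluate library. This is a minimal sketch; the predictions and references below are placeholders, since the actual evaluation data is not documented.

import evaluate

rouge = evaluate.load("rouge")

# Placeholder examples; the real evaluation set for this model is not documented.
predictions = ["A serene, blue-tinted bedroom with a mirrored interior."]
references = ["A blue-tinted bedroom scene with a mysterious reflected interior."]

print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}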

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
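
These hyperparameters map directly onto transformers' Seq2SeqTrainingArguments. The sketch below shows one way to reproduce them; output_dir is a hypothetical name, and the dataset/trainer wiring is omitted because the training script is not published.

from transformers import Seq2SeqTrainingArguments

# Hypothetical reproduction of the reported hyperparameters; output_dir is an assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="lamini-prompt-enhance-long",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    # Matches the reported Adam settings (also the transformers defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)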

Training results

Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | RougeL  | RougeLsum | Gen Len
2.4435        | 1.0   | 2014  | 2.2723          | 20.0108 | 9.2736 | 17.0569 | 18.8171   | 19.0
2.341         | 2.0   | 4028  | 2.2120          | 20.4422 | 9.4473 | 17.4347 | 19.2234   | 19.0
2.2948        | 3.0   | 6042  | 2.1820          | 20.5645 | 9.5426 | 17.5419 | 19.3714   | 19.0
2.2598        | 4.0   | 8056  | 2.1668          | 20.2354 | 9.3639 | 17.2379 | 19.0625   | 19.0
2.2431        | 5.0   | 10070 | 2.1624          | 20.2443 | 9.3642 | 17.2484 | 19.0703   | 19.0

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1