t5-Flan-Prompt-Enhance
t5-Flan-Prompt-Enhance is a fine-tuned version of google/flan-t5-small designed to enhance prompts, captions, and annotations: it rewrites short or plain textual inputs into clearer, richer, and more detailed versions.
Key Features:
- Prompt Expansion – Takes short or vague prompts and enriches them with more context, depth, and specificity.
- Caption Enhancement – Improves captions by adding more descriptive detail, making them more informative and engaging.
- Annotation Refinement – Makes annotations clearer, more structured, and contextually relevant.
Run with Transformers
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Model checkpoint
model_checkpoint = "prithivMLmods/t5-Flan-Prompt-Enhance"

# Tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

enhancer = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    repetition_penalty=1.2,
    device=0 if device == "cuda" else -1,
)

max_target_length = 256

# The input is prefixed with the task instruction "enhance prompt: "
prefix = "enhance prompt: "
short_prompt = "three chimneys on the roof, green trees and shrubs in front of the house"

answer = enhancer(prefix + short_prompt, max_length=max_target_length)
final_answer = answer[0]["generated_text"]
print(final_answer)
```
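The same pipeline can be reused for the caption and annotation use cases listed above. The sketch below is only illustrative: it assumes the `enhance prompt: ` prefix is also appropriate for captions and annotations (the card only documents it for prompts), the example captions are made up, and `num_beams` is just one possible generation setting rather than a value prescribed for this model.

```python
# Minimal sketch: batch-enhance several captions with the pipeline built above.
# Assumption: the "enhance prompt: " prefix also works for captions/annotations.
captions = [
    "a dog running on the beach",
    "city street at night",
]

outputs = enhancer(
    [prefix + c for c in captions],  # the pipeline accepts a list of inputs
    max_length=max_target_length,
    num_beams=4,  # illustrative: beam search tends to give more fluent rewrites
)

for caption, out in zip(captions, outputs):
    print(f"{caption} -> {out['generated_text']}")
```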
This fine-tuning allows t5-Flan-Prompt-Enhance to produce well-structured, contextually relevant rewrites, which is particularly useful for text generation, content creation, and AI-assisted writing.