FLAN-T5 Small - Sentiment Analysis

Fine-tuned version of google/flan-t5-small for sentiment analysis on IMDB reviews.

Model Details

  • Base Model: google/flan-t5-small
  • Task: Binary sentiment classification (positive/negative)
  • Dataset: IMDB movie reviews (300 training samples)
  • Accuracy: 85.00%

Usage

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("usef310/flan-t5-small-sentiment")
model = AutoModelForSeq2SeqLM.from_pretrained("usef310/flan-t5-small-sentiment")

# Prepend the "sentiment: " task prefix used during fine-tuning
text = "This movie was amazing!"
inputs = tokenizer("sentiment: " + text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(prediction)  # "positive" or "negative"
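To reproduce the 85.00% figure on your own split, decode each generated sequence as above and compare it to the gold label. A minimal sketch of the scoring step (the example predictions below are made up for illustration, not real model outputs):

```python
# Minimal sketch: score decoded model outputs against gold labels.
# Each prediction is the decoded string from the generate/decode steps above.
def accuracy(predictions, labels):
    """Fraction of predictions that exactly match their gold label."""
    assert len(predictions) == len(labels)
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

preds = ["positive", "negative", "positive", "positive"]
gold = ["positive", "negative", "negative", "positive"]
print(f"{accuracy(preds, gold):.2%}")  # 75.00%
```

Exact string matching works here because the model is trained to emit only the two label words; noisier generations would need normalization (lowercasing, stripping whitespace) first.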

Training Details

  • Epochs: 3
  • Batch size: 4 (with gradient accumulation)
  • Learning rate: 5e-5
  • Optimizer: AdamW
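With a per-device batch size of 4, gradient accumulation multiplies the effective batch size seen by the optimizer. The accumulation step count below is an illustrative assumption, not a value stated in this card:

```python
# Effective batch size under gradient accumulation:
# the optimizer steps once every `accum_steps` micro-batches.
per_device_batch = 4      # from the training details above
accum_steps = 8           # assumed for illustration
effective_batch = per_device_batch * accum_steps
print(effective_batch)  # 32

# 300 training samples -> micro-batches per epoch (last one may be partial)
micro_batches = -(-300 // per_device_batch)  # ceil division
print(micro_batches)  # 75
```

Accumulation lets a small GPU train with a larger effective batch at the cost of fewer optimizer updates per epoch.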
Model Size

  • 77M parameters (F32, safetensors)