DistilBERT Sentiment Classifier (IMDb) – saibapanku/distilbert-sentiment
This is a fine-tuned DistilBERT model for binary sentiment classification trained on the IMDb dataset. The model classifies movie reviews as either positive or negative.
Model Details
- Model name: saibapanku/distilbert-sentiment
- Base model: distilbert-base-uncased
- Task: Sequence Classification (Sentiment Analysis)
- Dataset: IMDb
- Labels: 0 = Negative, 1 = Positive (the snippet below shows how to confirm this mapping from the hosted config)
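The label names returned by the pipeline depend on the id2label mapping saved with the model. Here is a minimal check, assuming the mapping above was stored in the repo's config (the lowercase 'positive' in the example output further down suggests it was):

from transformers import AutoConfig

# Inspect the label mapping stored alongside the model on the Hub.
config = AutoConfig.from_pretrained("saibapanku/distilbert-sentiment")
print(config.id2label)  # expected (if saved as above): {0: 'negative', 1: 'positive'}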
How to Use
You can load and use the model directly with 🤗 Transformers:
from transformers import pipeline
classifier = pipeline("text-classification", model="saibapanku/distilbert-sentiment")
print(classifier("This movie was absolutely amazing!"))
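If you prefer working below the pipeline abstraction, the same prediction can be made with the tokenizer and model classes directly. This is a minimal sketch: the max_length=256 value mirrors the training setup described under Training Configuration, and the printed label name depends on the id2label mapping stored in the repo's config.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("saibapanku/distilbert-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("saibapanku/distilbert-sentiment")
model.eval()

# Tokenize a review, truncating to the 256-token limit used during training.
inputs = tokenizer(
    "This movie was absolutely amazing!",
    truncation=True,
    max_length=256,
    return_tensors="pt",
)

# Forward pass, then convert logits to class probabilities.
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(model.config.id2label[pred], round(probs[0, pred].item(), 4))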
Training Configuration
- Training method: Hugging Face Trainer (a minimal reconstruction sketch is included below)
- Epochs: 3
- Batch size: 16
- Max sequence length: 256 tokens
- Learning rate: Trainer default (5e-5)
- Weight decay: 0.01
- Evaluation strategy: per epoch
- Metric used: Accuracy
- Subset used: 2,000 train / 1,000 test samples (for demo purposes)
Example output from the usage snippet above: [{'label': 'positive', 'score': 0.9843}]
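The original training script is not included in this repository. The following is a minimal sketch of how the configuration above could be reproduced with the Trainer API; the random seed, output directory, padding collator, and the way the 2,000/1,000 subsets are sampled are assumptions, and the learning rate is simply left at the Trainer default (5e-5).

import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Small IMDb subset for the demo: 2,000 training and 1,000 test reviews.
raw = load_dataset("imdb")
train_ds = raw["train"].shuffle(seed=42).select(range(2000))
test_ds = raw["test"].shuffle(seed=42).select(range(1000))

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate reviews to the 256-token maximum.
    return tokenizer(batch["text"], truncation=True, max_length=256)

train_ds = train_ds.map(tokenize, batched=True)
test_ds = test_ds.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)

args = TrainingArguments(
    output_dir="distilbert-sentiment",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    eval_strategy="epoch",  # called evaluation_strategy in older transformers releases
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()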
Limitations
This model was trained on a small subset of the IMDb dataset and may not generalize well to all types of reviews.
Performance on domain-specific or non-English content is not guaranteed; both the base model and the IMDb training data are English-only.
License
This model is distributed under the MIT License.
Feel free to fine-tune further or adapt it for your specific sentiment analysis tasks!