🧠 Sentiment Analysis Model: DistilBERT Fine-Tuned on IMDb 🎬

This model is a fine-tuned version of distilbert-base-uncased on the IMDb movie review dataset for binary sentiment classification (positive/negative). It was trained using Hugging Face Transformers and PyTorch.

🔍 Intended Use

This model is designed to classify movie reviews (or other English text) as positive or negative sentiment. It's ideal for:

  • Opinion mining
  • Social media analysis
  • Review classification
  • Text classification demos
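
For quick experiments with any of these use cases, the Transformers pipeline API wraps tokenization, inference, and decoding in a single call. A minimal sketch (note that labels may surface as LABEL_0/LABEL_1 rather than Negative/Positive unless id2label is set in the model config):

from transformers import pipeline

# One-liner inference with the text-classification pipeline
classifier = pipeline("text-classification", model="bmdavis/my-language-model")
print(classifier("An instant classic with a phenomenal cast."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  (LABEL_1 = Positive)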

🧪 Example Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "bmdavis/my-language-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for deterministic inference

text = "This movie was amazing and really well-acted!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():  # no gradients needed at inference time
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=-1).item()

print("Sentiment:", "Positive" if prediction == 1 else "Negative")

📊 Dataset

Trained and evaluated on the IMDb movie review dataset:

  • 25,000 training samples
  • 25,000 test samples
  • Labels: 0 = Negative, 1 = Positive
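
The same splits can be loaded with the Hugging Face datasets library; a short sketch, assuming the standard "imdb" dataset id on the Hub:

from datasets import load_dataset

# 25,000 train / 25,000 test reviews; labels: 0 = Negative, 1 = Positive
imdb = load_dataset("imdb")
print(imdb["train"].num_rows, imdb["test"].num_rows)  # 25000 25000
print(imdb["train"][0]["text"][:80], imdb["train"][0]["label"])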

🧠 Model Details

  • Base Model: distilbert-base-uncased
  • Architecture: DistilBERT, a distilled Transformer encoder (BERT-like)
  • Parameters: ~67M (F32, stored as Safetensors)
  • Framework: PyTorch
  • Tokenizer: WordPiece
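
To illustrate the WordPiece tokenizer: words outside the base vocabulary are split into subword pieces prefixed with "##" (the exact split depends on the vocabulary, so the output below is indicative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# Rare words are decomposed into known subword units
print(tokenizer.tokenize("unwatchable"))  # e.g. ['un', '##watch', '##able']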

🛠️ Training

  • Epochs: 3
  • Batch Size: 8
  • Optimizer: AdamW
  • Loss: Cross-entropy
  • Trained with the Hugging Face Trainer API
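
A minimal sketch of how such a run looks with the Trainer API, using the hyperparameters above; everything not listed (learning rate, output directory, padding strategy) is an assumption, not a record of the actual run:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Tokenize the IMDb splits (truncation keeps reviews within the 512-token limit)
dataset = load_dataset("imdb").map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True)

args = TrainingArguments(
    output_dir="distilbert-imdb",    # assumed; not recorded in this card
    num_train_epochs=3,              # as listed above
    per_device_train_batch_size=8,   # as listed above
)

# Trainer defaults to AdamW and the model's built-in cross-entropy loss
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()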

🔐 License

This model is released under the Apache 2.0 license.

✍️ Author

Created by Brody Davis (@bmdavis). Trained and uploaded using the Hugging Face Hub and Transformers.