# Model Card for Sentiment Analysis Model

## Model Details

### Model Description
This model is a fine-tuned version of distilbert-base-uncased for binary sentiment analysis. It is designed to classify text as either positive or negative sentiment.
- Developed by: HAMZA JR
- Model type: Transformer-based sentiment classifier
- Trained on: A labeled dataset for sentiment analysis (IMDB dataset)
- Libraries used: transformers, peft (see the fine-tuning sketch after this list)
- License: Apache 2.0
- Finetuned from model: distilbert-base-uncased
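
Because the card lists both transformers and peft, the fine-tuning was presumably done with a parameter-efficient (LoRA) adapter on top of DistilBERT. The sketch below shows one way such a setup can look; the LoRA hyperparameters, target modules, and training arguments are illustrative assumptions, not the exact recipe used for this checkpoint.

```python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Wrap the base model with a LoRA adapter. The target modules below are
# DistilBERT's attention projections; the rank and alpha values are assumptions.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],
)
model = get_peft_model(model, peft_config)

# IMDB provides binary labels: 0 = negative, 1 = positive.
dataset = load_dataset("imdb")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sentiment-analysis-model",
        per_device_train_batch_size=16,
        num_train_epochs=2,
        learning_rate=2e-4,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("sentiment-analysis-model")
```
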
## Performance: Accuracy Comparison
| Model | Accuracy |
|---|---|
| Baseline (Pretrained) | 47.90% |
| Fine-Tuned Model | 87.50% |
The fine-tuning process significantly improved accuracy from 47.90% to 87.50%, making the model much better for sentiment classification.
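
The comparison above can be approximated by scoring predictions against a sample of the IMDB test split. The following is a minimal evaluation sketch using the datasets and evaluate libraries; the sample size and batch size are arbitrary choices rather than the card's exact evaluation protocol.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline


def imdb_accuracy(model_name, n_samples=500):
    """Estimate a text-classification model's accuracy on a sample of the IMDB test split."""
    # Shuffle so the sample contains both negative (0) and positive (1) reviews.
    dataset = load_dataset("imdb", split="test").shuffle(seed=0).select(range(n_samples))
    classifier = pipeline("text-classification", model=model_name)

    predictions = classifier(dataset["text"], truncation=True, batch_size=16)
    # Pipeline labels look like "LABEL_0" / "LABEL_1"; map them back to integer ids.
    pred_ids = [int(p["label"].split("_")[-1]) for p in predictions]

    accuracy = evaluate.load("accuracy")
    return accuracy.compute(predictions=pred_ids, references=dataset["label"])


# Hypothetical comparison of the pretrained baseline and the fine-tuned checkpoint:
print(imdb_accuracy("distilbert-base-uncased"))
print(imdb_accuracy("Jr0hamza/sentiment-analysis-model"))
```
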
## Uses

### How to Get Started with the Model
Use the code below to classify text using the fine-tuned model:
```python
from transformers import pipeline


def classify_text(model_name, text):
    """
    Classifies text using a Hugging Face model and maps numeric labels
    to meaningful labels.

    Args:
        model_name (str): The name of the Hugging Face model.
        text (str): The input text to classify.

    Returns:
        list: A list of dictionaries with the predicted label and confidence score.
    """
    # Define custom label mapping
    label_map = {0: "negative", 1: "positive"}

    # Load the pipeline
    classifier = pipeline("text-classification", model=model_name)

    # Get the prediction
    prediction = classifier(text)

    # Convert label IDs such as "LABEL_1" to meaningful text
    for pred in prediction:
        pred["label"] = label_map[int(pred["label"].split("_")[-1])]

    return prediction


# Example usage:
model_name = "Jr0hamza/sentiment-analysis-model"
text = "I absolutely loved this movie! It was fantastic."
result = classify_text(model_name, text)
print(result)
```
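
Since peft is listed among the libraries, the repository may contain a LoRA adapter rather than full model weights. If the pipeline call above cannot resolve the checkpoint directly, the adapter can be loaded explicitly and merged into the base model. This is a hedged sketch that assumes the repo follows the standard PEFT adapter layout.

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

adapter_repo = "Jr0hamza/sentiment-analysis-model"

# Load the base model with the LoRA adapter applied, then merge the adapter
# weights so the result behaves like a plain sequence-classification model.
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_repo, num_labels=2)
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(classifier("The plot was dull and the acting was worse."))
```
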
### Framework versions

- PEFT 0.14.0