πŸ“ Model Card: bert-imdb-finetuned

πŸ” Introduction

The wakaflocka17/bert-imdb-finetuned model is a fine-tuned version of google-bert/bert-base-uncased for sentiment classification on the IMDb dataset. Trained on movie reviews, it distinguishes positive from negative sentiment with about 87% accuracy (see the metrics below). This card collects its evaluation metrics, training parameters, and a practical example of its use in Google Colab.

📊 Evaluation Metrics

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.8734 |
| Precision | 0.8661 |
| Recall    | 0.8834 |
| F1-score  | 0.8746 |
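These are the standard binary-classification metrics. As a minimal sketch of how they can be computed from predictions, assuming scikit-learn and binary averaging with POSITIVE (label 1) as the positive class (the project's actual evaluation script may differ):

# Hypothetical sketch: computing the four metrics above from gold labels
# and model predictions; the repository's own evaluation code may differ.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0]  # gold labels: 0 = NEGATIVE, 1 = POSITIVE
y_pred = [1, 0, 1, 0, 0]  # model predictions on the same examples

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"  # label 1 (POSITIVE) is the positive class
)
print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")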

βš™οΈ Training Parameters

Parameter Values
Modello di base bert-base-uncased
Repo pretrained bert-base-uncased
Repo finetuned models/bert_base_uncased
Repo downloaded models/downloaded/bert_base_uncased
Epochs 3
Batch size (train) 16
Batch size (eval) 32
Numero di label 2
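For context, here is a minimal fine-tuning sketch matching these parameters with the Hugging Face Trainer API. This is an assumption about the setup: the tokenization, padding strategy, and any settings not listed in the table are illustrative, not taken from the project.

# Hypothetical sketch reproducing the parameters in the table above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset   = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model     = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # Number of labels: 2
)

def tokenize(batch):
    # Truncate to BERT's 512-token maximum input length
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="models/bert_base_uncased",  # fine-tuned repo path from the table
    num_train_epochs=3,                     # Epochs: 3
    per_device_train_batch_size=16,         # Batch size (train): 16
    per_device_eval_batch_size=32,          # Batch size (eval): 32
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()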

🚀 Example of use in Colab

Installing dependencies

!pip install --upgrade transformers huggingface_hub

(Optional) Authentication for private models.

from huggingface_hub import login
login(token="hf_yourhftoken")
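In Colab you can also use the interactive login widget instead of pasting the token into code:

from huggingface_hub import notebook_login
notebook_login()  # opens a token prompt directly in the notebook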

Loading tokenizer and model

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

repo_id   = "wakaflocka17/bert-imdb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model     = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Override default labels
model.config.id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}
model.config.label2id = {'NEGATIVE': 0, 'POSITIVE': 1}

# Create the classification pipeline; top_k=None returns scores for all labels
# (return_all_scores=True is deprecated in recent transformers versions)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=None)
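Equivalently, the high-level pipeline() factory collapses the loading steps above into a single call. A minimal sketch (the label override is still needed if the uploaded config does not define the label names):

from transformers import pipeline

clf = pipeline("text-classification", model="wakaflocka17/bert-imdb-finetuned", top_k=None)
clf.model.config.id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}  # same override as above
clf.model.config.label2id = {'NEGATIVE': 0, 'POSITIVE': 1}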

Inference on a text example

testo     = "This movie was absolutely fantastic: wonderful performances and a gripping story!"
risultati = pipe(testo)
print(risultati)
# Example output:
# [{'label': 'POSITIVE', 'score': 0.95}, {'label': 'NEGATIVE', 'score': 0.05}]
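IMDb reviews can exceed BERT's 512-token limit, and it is sometimes useful to inspect the raw probabilities. Here is a minimal sketch of the same inference done without the pipeline, reusing the tokenizer and model loaded above:

# Manual inference, equivalent to what the pipeline does internally
import torch

inputs = tokenizer(testo, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))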

📖 How to cite

If you use this model in your work, you can cite it as:

@misc{Sentiment-Project,
  author       = {Francesco Congiu},
  title        = {Sentiment Analysis with Pretrained, Fine-tuned and Ensemble Transformer Models},
  howpublished = {\url{https://github.com/wakaflocka17/DLA_LLMSANALYSIS}},
  year         = {2025}
}

🔗 Reference Repository

The full project structure and example scripts can be found at: https://github.com/wakaflocka17/DLA_LLMSANALYSIS/tree/main
