πŸ“ Model Card: gptneo-imdb-finetuned

πŸ” Introduction

The wakaflocka17/gptneo-imdb-finetuned model is a fine-tuned version of EleutherAI/gpt-neo-2.7B for sentiment classification on the IMDb dataset. Trained on movie reviews, it distinguishes between positive and negative sentiment. Below you will find its evaluation metrics, training parameters, and a practical usage example for Google Colab.

πŸ“Š Evaluation Metrics

Metric     Value
Accuracy   0.8412
Precision  0.8538
Recall     0.8234
F1-score   0.8384
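
For reference, here is a minimal sketch of how metrics like these can be computed with scikit-learn. The variable names and label arrays are illustrative assumptions; in practice they would come from running the model over the IMDb test split (0 = NEGATIVE, 1 = POSITIVE):

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold labels and model predictions (illustrative only)
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"Accuracy: {accuracy:.4f}  Precision: {precision:.4f}  "
      f"Recall: {recall:.4f}  F1: {f1:.4f}")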

βš™οΈ Training Parameters

Parameter            Value
Base model           EleutherAI/gpt-neo-2.7B
Pretrained repo      EleutherAI/gpt-neo-2.7B
Fine-tuned repo      models/gpt_neo_2_7b
Downloaded repo      models/downloaded/gpt_neo_2_7b
Epochs               1
Batch size (train)   1
Batch size (eval)    1
Number of labels     2
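
The card does not ship the training script, but a minimal fine-tuning sketch that matches these parameters (base model, one epoch, per-device batch size 1, two labels) could look like the following. The output directory, max_length, and padding strategy are assumptions, not details from the original run:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

base_model = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo ships without a pad token

model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="models/gpt_neo_2_7b",  # mirrors the fine-tuned repo path above
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()

Note that fully fine-tuning a 2.7B-parameter model requires substantial GPU memory; this sketch omits whatever memory-saving options (gradient checkpointing, mixed precision, etc.) the original run may have used.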

πŸš€ Example of use in Colab

Installing dependencies

!pip install --upgrade transformers huggingface_hub

(Optional) Authentication for private models

from huggingface_hub import login
login(token="hf_yourhftoken")

Loading tokenizer and model

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

repo_id   = "wakaflocka17/gptneo-imdb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model     = AutoModelForSequenceClassification.from_pretrained(repo_id)
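
# Note: GPT-Neo has no padding token by default. Single-string inference works
# as-is, but if you batch inputs you will likely need (an assumption about
# your setup, not part of the original card):
#   tokenizer.pad_token = tokenizer.eos_token
#   model.config.pad_token_id = tokenizer.pad_token_id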

# Override default labels
model.config.id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}
model.config.label2id = {'NEGATIVE': 0, 'POSITIVE': 1}

# Create the classification pipeline (top_k=None returns scores for all
# labels and replaces the deprecated return_all_scores=True)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=None)

Inference on a text example

testo     = "This movie was absolutely fantastic—wonderful performances and a gripping story!"
risultati = pipe(testo)
print(risultati)
# Example output:
# [{'label': 'POSITIVE', 'score': 0.95}, {'label': 'NEGATIVE', 'score': 0.05}]
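
To extract just the predicted label from the scores above, here is a small follow-on sketch (the variable name best is illustrative):

# Pick the highest-scoring label from the list of per-label scores
best = max(risultati, key=lambda d: d["score"])
print(f"Predicted sentiment: {best['label']} ({best['score']:.2f})")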

πŸ“– How to cite

If you use this model in your work, you can cite it as:

@misc{Sentiment-Project,
  author       = {Francesco Congiu},
  title        = {Sentiment Analysis with Pretrained, Fine-tuned and Ensemble Transformer Models},
  howpublished = {\url{https://github.com/wakaflocka17/DLA_LLMSANALYSIS}},
  year         = {2025}
}

πŸ”— Reference Repository

The complete file structure and example scripts are available at: https://github.com/wakaflocka17/DLA_LLMSANALYSIS/tree/main
