GRU Sentiment Analysis Model

This is a custom one-layer GRU model trained for sentiment analysis on the TripAdvisor dataset. The model predicts a sentiment score on a scale of 1 to 5 from review text.

  • Base Model: Custom one-layer GRU
  • Dataset: nhull/tripadvisor-split-dataset-v2
  • Use Case: Sentiment classification for customer reviews to understand customer satisfaction.
  • Output: Sentiment labels (1–5)

Model Details

  • Embedding: 100-dimensional pre-trained GloVe embeddings
  • Learning Rate: 3e-04
  • Batch Size: 64
  • Epochs: 20 (early stopping with patience = 3)
  • Dropout: 0.3
  • Tokenizer: Custom tokenizer (vocabulary size: 10,000)
  • Framework: TensorFlow/Keras
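Given these hyperparameters, the architecture can be sketched in Keras roughly as follows. The GRU hidden width and the dummy sequence length are assumptions for illustration, not values taken from this card; in the real model the embedding weights are initialized from the pre-trained GloVe vectors rather than randomly.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10_000   # tokenizer vocabulary size (from the card)
EMBED_DIM = 100       # GloVe embedding dimension (from the card)
MAX_LEN = 200         # assumed maximum sequence length
GRU_UNITS = 128       # assumed hidden width of the single GRU layer

model = tf.keras.Sequential([
    # In the real model, this layer is initialized from glove.6B.100d.txt.
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.GRU(GRU_UNITS, dropout=0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # one output per star (1-5)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# A dummy batch of token ids, just to confirm the output shape.
dummy = np.random.randint(0, VOCAB_SIZE, size=(2, MAX_LEN))
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (2, 5)
```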

Intended Use

This model is designed to classify hotel reviews based on their sentiment. It assigns a star rating between 1 and 5 to a review, indicating the sentiment expressed in the review (1 = very bad, 2 = bad, 3 = neutral, 4 = good, 5 = very good).
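Since the model's softmax head emits five class probabilities, the predicted star rating is simply the argmax shifted onto the 1–5 scale. A minimal helper (the function name is illustrative, not part of the released code):

```python
def probs_to_stars(probs):
    """Map a 5-way softmax output to a 1-5 star rating via argmax."""
    if len(probs) != 5:
        raise ValueError("expected exactly five class probabilities")
    # Class index 0 corresponds to 1 star, index 4 to 5 stars.
    return max(range(5), key=probs.__getitem__) + 1

print(probs_to_stars([0.05, 0.10, 0.15, 0.25, 0.45]))  # -> 5
```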


Dataset

The dataset used for training, validation, and testing is nhull/tripadvisor-split-dataset-v2. It consists of:

  • Training Set: 30,400 reviews
  • Validation Set: 1,600 reviews
  • Test Set: 8,000 reviews

All splits are balanced across five sentiment labels.


Test Performance

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.6216 |
| Precision | 0.62   |
| Recall    | 0.62   |
| F1-Score  | 0.62   |

Classification Report (Test Set)

| Label | Precision | Recall | F1-Score | Support |
|-------|-----------|--------|----------|---------|
| 1     | 0.71      | 0.72   | 0.72     | 1600    |
| 2     | 0.51      | 0.57   | 0.54     | 1600    |
| 3     | 0.58      | 0.54   | 0.56     | 1600    |
| 4     | 0.59      | 0.49   | 0.53     | 1600    |
| 5     | 0.71      | 0.79   | 0.75     | 1600    |

Confusion Matrix (Test Set)

| True \ Predicted | 1   | 2   | 3   | 4   | 5   |
|------------------|-----|-----|-----|-----|-----|
| 1                | 239 | 70  | 9   | 1   | 1   |
| 2                | 83  | 173 | 60  | 2   | 2   |
| 3                | 13  | 68  | 188 | 46  | 5   |
| 4                | 2   | 11  | 68  | 151 | 88  |
| 5                | 3   | 2   | 13  | 81  | 221 |
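Note that each row of this matrix sums to 320 rather than the 1,600 test reviews per class, so the matrix appears to cover a subset of the test set; metrics recomputed from it come close to, but do not exactly match, the classification report above. Per-class precision and recall follow directly from the matrix (column sums and row sums, respectively):

```python
# Rows = true label (1-5), columns = predicted label, copied from the table above.
cm = [
    [239,  70,   9,   1,   1],
    [ 83, 173,  60,   2,   2],
    [ 13,  68, 188,  46,   5],
    [  2,  11,  68, 151,  88],
    [  3,   2,  13,  81, 221],
]

def per_class_metrics(cm):
    """Return (precision, recall) per class from a square confusion matrix."""
    n = len(cm)
    col_sums = [sum(cm[r][c] for r in range(n)) for c in range(n)]
    out = []
    for c in range(n):
        tp = cm[c][c]                       # diagonal = correct predictions
        precision = tp / col_sums[c] if col_sums[c] else 0.0
        recall = tp / sum(cm[c]) if sum(cm[c]) else 0.0
        out.append((precision, recall))
    return out

accuracy = sum(cm[i][i] for i in range(5)) / sum(map(sum, cm))
print(round(accuracy, 4))  # 0.6075 on this matrix (vs. 0.6216 on the full test set)
print([round(p, 2) for p, _ in per_class_metrics(cm)])
```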

Files Included

  • correct_predictions_GRU.csv: Contains correctly classified reviews with their real and predicted labels.
  • misclassified_predictions_GRU.csv: Contains misclassified reviews with their real and predicted labels, along with the difference.
  • glove.6B.100d.txt: Pre-trained 100-dimensional GloVe embeddings used for initializing the embedding layer in the models.
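The GloVe text format stores one word per line followed by its vector components, separated by spaces. A minimal pure-Python parser that could be used to build the embedding matrix (a sketch; the released code's actual loading routine may differ):

```python
def load_glove(lines):
    """Parse GloVe text format ('word v1 v2 ... vN' per line) into a dict."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        word, values = parts[0], parts[1:]
        vectors[word] = [float(v) for v in values]
    return vectors

# Tiny synthetic example in the same format; the real glove.6B.100d.txt
# holds 400k words with 100 components each.
sample = ["hotel 0.1 0.2 0.3", "clean -0.5 0.0 0.25"]
emb = load_glove(sample)
print(emb["hotel"])  # [0.1, 0.2, 0.3]
```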

Limitations

  1. Domain-Specific: The model was trained on the TripAdvisor Sentiment Dataset, so it may not generalize to other types of reviews (e.g., Amazon, Yelp) or domains (e.g., tech product reviews) without further fine-tuning.
  2. Subjectivity: Sentiment annotations are subjective and may not fully represent every user's perception, especially for neutral or mixed reviews.
  3. Performance: The model's performance for mid-range sentiment labels (e.g., 2 and 3) is lower compared to extreme sentiment labels (1 and 5), as these tend to have more nuanced language.
  4. Dependency on Pre-trained Embeddings: The model relies on pre-trained GloVe embeddings, so its performance is closely tied to the quality and representativeness of those embeddings. Because GloVe was trained on a large, general corpus, it may not fully capture domain-specific nuances, such as phrasing and terms specific to hotel and restaurant reviews.
  5. GRU Simplicity: While the GRU model offers a simpler and more efficient architecture, it may not capture complex sequential patterns as effectively as more sophisticated models like BiLSTM or LSTM. This simplicity, however, contributes to its strong generalization performance on cross-domain data (e.g., Michelin dataset).