# BiLSTM Sentiment Analysis Model

This is a BiLSTM model trained for sentiment analysis on the TripAdvisor dataset. The model predicts a sentiment score on a scale of 1 to 5 from review text.
- Base Model: Custom single-layer bidirectional LSTM (BiLSTM)
- Dataset: nhull/tripadvisor-split-dataset-v2
- Use Case: Sentiment classification for customer reviews to understand customer satisfaction.
- Output: Sentiment labels (1–5)
## Model Details
- Embedding: 100-dimensional pre-trained GloVe embeddings
- Learning Rate: 3e-04
- Batch Size: 64
- Epochs: 20 (early stopping with patience = 3)
- Dropout: 0.2
- Tokenizer: Custom tokenizer (vocabulary size: 10,000)
- Framework: TensorFlow/Keras
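A minimal Keras sketch consistent with the details above. The number of LSTM units, the frozen embedding layer, and the loss function are assumptions not stated in this card:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000   # custom tokenizer vocabulary size (from the card)
EMBED_DIM = 100       # GloVe 100d embeddings
NUM_CLASSES = 5       # star ratings 1-5
LSTM_UNITS = 128      # assumption: not stated in the card

# Placeholder; in practice this is built from glove.6B.100d.txt (see "Files Included").
embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")

model = models.Sequential([
    layers.Embedding(
        input_dim=VOCAB_SIZE,
        output_dim=EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # assumption: GloVe vectors kept frozen
    ),
    layers.Bidirectional(layers.LSTM(LSTM_UNITS)),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training setup from the card: 20 epochs, batch size 64, early stopping (patience 3).
early_stop = tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=20, batch_size=64, callbacks=[early_stop])
```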
## Intended Use
This model is designed to classify hotel reviews based on their sentiment. It assigns a star rating between 1 and 5 to a review, indicating the sentiment expressed in the review (1 = very bad, 2 = bad, 3 = neutral, 4 = good, 5 = very good).
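A hedged inference sketch. The model file name, padding length, and class-to-star mapping are assumptions, and the tokenizer must be the same custom tokenizer used during training:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 200  # assumption: the padding length used during training is not stated here

# Illustrative file name; check the repository for the actual model artifact.
model = tf.keras.models.load_model("bilstm_model.h5")

def predict_rating(text: str, tokenizer) -> int:
    """Map a raw review to a 1-5 star rating using the custom tokenizer."""
    seq = tokenizer.texts_to_sequences([text])
    padded = pad_sequences(seq, maxlen=MAX_LEN, padding="post", truncating="post")
    probs = model.predict(padded, verbose=0)[0]
    return int(np.argmax(probs)) + 1  # class indices 0-4 map to stars 1-5

# Example:
# print(predict_rating("The room was spotless and the staff were friendly.", tokenizer))
```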
## Dataset
The dataset used for training, validation, and testing is nhull/tripadvisor-split-dataset-v2. It consists of:
- Training Set: 30,400 reviews
- Validation Set: 1,600 reviews
- Test Set: 8,000 reviews
All splits are balanced across five sentiment labels.
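The splits can be loaded with the Hugging Face datasets library; the split and column names below are assumptions to verify against the dataset card:

```python
from datasets import load_dataset

ds = load_dataset("nhull/tripadvisor-split-dataset-v2")
print(ds)  # expected: train (30,400), validation (1,600), test (8,000)

# Column names are an assumption; inspect them before use.
print(ds["train"].column_names)
```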
## Test Performance

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.6167 |
| Precision | 0.62   |
| Recall    | 0.62   |
| F1-Score  | 0.62   |
## Classification Report (Test Set)

| Label | Precision | Recall | F1-Score | Support |
|-------|-----------|--------|----------|---------|
| 1     | 0.73      | 0.71   | 0.72     | 1600    |
| 2     | 0.53      | 0.47   | 0.50     | 1600    |
| 3     | 0.52      | 0.60   | 0.56     | 1600    |
| 4     | 0.56      | 0.55   | 0.55     | 1600    |
| 5     | 0.74      | 0.75   | 0.74     | 1600    |
## Confusion Matrix (Test Set)

| True \ Predicted | 1   | 2   | 3   | 4   | 5   |
|------------------|-----|-----|-----|-----|-----|
| 1                | 251 | 64  | 4   | 1   | 0   |
| 2                | 106 | 159 | 52  | 2   | 1   |
| 3                | 17  | 88  | 168 | 37  | 10  |
| 4                | 3   | 14  | 88  | 128 | 87  |
| 5                | 5   | 6   | 19  | 66  | 224 |
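The figures in the tables above can be reproduced from model predictions with scikit-learn; this is a generic recipe, not necessarily the exact evaluation script used:

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

def evaluate(y_true, y_pred):
    """y_true / y_pred: star labels (1-5) for the test reviews."""
    print("Accuracy:", round(accuracy_score(y_true, y_pred), 4))
    print(classification_report(y_true, y_pred, digits=2))
    print(confusion_matrix(y_true, y_pred, labels=[1, 2, 3, 4, 5]))
```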
## Files Included

- correct_predictions_BiLSTM.csv: Correctly classified reviews with their true and predicted labels.
- misclassified_predictions_BiLSTM.csv: Misclassified reviews with their true and predicted labels, along with the difference between them.
- glove.6B.100d.txt: Pre-trained 100-dimensional GloVe embeddings used to initialize the embedding layer.
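A common pattern for turning glove.6B.100d.txt into the embedding matrix for the 10,000-word vocabulary; this is a sketch of the standard approach, not necessarily the exact code used here:

```python
import numpy as np

EMBED_DIM = 100
VOCAB_SIZE = 10_000

def build_embedding_matrix(glove_path: str, word_index: dict) -> np.ndarray:
    """word_index maps token -> integer id, as produced by the custom tokenizer."""
    glove = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:                        # each line: "<word> <100 floats>"
            parts = line.rstrip().split(" ")
            glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

    matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")
    for word, idx in word_index.items():
        if idx < VOCAB_SIZE and word in glove:
            matrix[idx] = glove[word]         # out-of-vocabulary rows stay zero
    return matrix

# embedding_matrix = build_embedding_matrix("glove.6B.100d.txt", tokenizer.word_index)
```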
## Limitations
- Domain-Specific: The model was trained on the TripAdvisor Sentiment Dataset, so it may not generalize to other types of reviews (e.g., Amazon, Yelp) or domains (e.g., tech product reviews) without further fine-tuning.
- Subjectivity: Sentiment annotations are subjective and may not fully represent every user's perception, especially for neutral or mixed reviews.
- Performance: Performance on mid-range sentiment labels (e.g., 2 and 3) is lower than on the extreme labels (1 and 5), as mid-range reviews tend to use more nuanced language.
- Dependency on Pre-trained Embeddings: The model relies on pre-trained GloVe embeddings, so its performance is closely tied to the quality and representativeness of those embeddings. Because GloVe was trained on a large, general-purpose corpus, it may not fully capture domain-specific nuances, such as the phrasing and terminology of hotel and restaurant reviews.