---
library_name: transformers
license: mit
tags:
  - sentiment-analysis
  - bert
  - lora
  - peft
  - huggingface
  - transformers
  - text-classification
  - low-resource
model-index:
  - name: LoRA-BERT for Sentiment Analysis (SST-2)
    results:
      - task:
          type: text-classification
          name: Sentiment Analysis
        dataset:
          type: glue
          name: SST-2
        metrics:
          - type: accuracy
            value: 0.9117
            name: Accuracy
datasets:
  - stanfordnlp/sst2
language:
  - en
metrics:
  - accuracy
base_model:
  - google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# πŸ€– LoRA-BERT for Sentiment Analysis (SST-2)

This is a lightweight, parameter-efficient BERT model fine-tuned with LoRA (Low-Rank Adaptation) for binary sentiment classification on the SST-2 dataset.


## πŸ’‘ Model Highlights

  • βœ… Fine-tuned using LoRA (r=8, Ξ±=16) on top of bert-base-uncased
  • βœ… Trained on SST2
  • βœ… Achieves ~91.17% validation accuracy
  • βœ… Lightweight: only LoRA adapter weights are updated

## πŸ“Š Results

| Epoch | Training Loss | Validation Loss | Accuracy |
|------:|--------------:|----------------:|---------:|
| 1     | 0.3030        | 0.2467          | 89.91%   |
| 2     | 0.1972        | 0.2424          | 90.94%   |
| 3     | 0.2083        | 0.2395          | 91.17%   |
| 4     | 0.1936        | 0.2464          | 90.94%   |
| 5     | 0.1914        | 0.2491          | 90.83%   |

Validation accuracy peaks at epoch 3 (91.17%) and degrades slightly afterwards, so early stopping from epoch 3 would be a reasonable choice; a sketch of how to wire this up follows.
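
For reference, `transformers.EarlyStoppingCallback` implements this pattern. The sketch below is a hypothetical re-creation of such a training loop, not the author's actual script (`lora_model` is the PEFT-wrapped model from the earlier sketch):

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoTokenizer, DataCollatorWithPadding,
    Trainer, TrainingArguments, EarlyStoppingCallback,
)

tok = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
ds = load_dataset("stanfordnlp/sst2")
ds = ds.map(lambda b: tok(b["sentence"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="out",
    eval_strategy="epoch",           # evaluate once per epoch
    save_strategy="epoch",           # needed for load_best_model_at_end
    load_best_model_at_end=True,     # restore the best checkpoint at the end
    metric_for_best_model="accuracy",
    num_train_epochs=5,
)
trainer = Trainer(
    model=lora_model,                # PEFT-wrapped model from the sketch above
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    data_collator=DataCollatorWithPadding(tok),
    compute_metrics=compute_metrics,
    # stop once validation accuracy fails to improve for one epoch
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
```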


πŸ› οΈ Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel, PeftConfig

model_id = "Harsh-Gupta/bert-lora-sentiment"

# Load the PEFT config, the base model it points to, and the LoRA adapter
config = PeftConfig.from_pretrained(model_id)
base_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, model_id)
model.eval()

# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Predict
text = "This movie was absolutely amazing!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
    probs = outputs.logits.softmax(dim=-1)
    pred = probs.argmax().item()
```
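
To map the predicted index to a label, SST-2 conventionally uses 0 = negative and 1 = positive; the snippet below assumes that convention (if the checkpoint ships its own mapping, prefer `model.config.id2label`):

```python
# Assumes the standard SST-2 label order: 0 = negative, 1 = positive
labels = {0: "negative", 1: "positive"}
print(f"{text!r} -> {labels[pred]} (p={probs[0, pred].item():.3f})")
```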

## LoRA Configuration

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                               # rank of the low-rank update matrices
    lora_alpha=4,                       # scaling factor applied to the update
    target_modules=["query", "value"],  # adapt the attention Q/V projections
    lora_dropout=0.1,
    bias="none",
    task_type="SEQ_CLS",
)
```
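
If you prefer adapter-free inference, PEFT can fold the LoRA weights back into the base model; `merge_and_unload()` returns a plain `transformers` model that can be saved and served without `peft` installed:

```python
# Fold the LoRA weights into the base model (uses `model` from the Usage section)
merged = model.merge_and_unload()
merged.save_pretrained("bert-sst2-merged")   # illustrative output path
tokenizer.save_pretrained("bert-sst2-merged")
```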

πŸ” Intended Use

  • Sentiment classification for binary text (positive/negative)

  • Can be adapted to other domains: movie reviews, product reviews, tweets


## 🧠 Author

  • Harsh Gupta
  • MCA, Jawaharlal Nehru University (JNU)
  • GitHub: 2003Harsh