Trilingual Sentiment LoRA Model

Model Summary

Trilingual Sentiment LoRA is a fine-tuned XLM-RoBERTa model for multilingual sentiment classification of Arabic, English, and Spanish text.
It applies Low-Rank Adaptation (LoRA) via the PEFT framework: only small low-rank adapter matrices are trained while the base model weights stay frozen, keeping fine-tuning parameter-efficient without giving up the base model's multilingual understanding.
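As a rough illustration, an adapter of this kind can be attached to the base model with PEFT roughly as follows. This is a minimal sketch only: the rank, scaling, dropout, and target modules shown are assumptions, since the card does not list the actual training hyperparameters.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base model: xlm-roberta-base with a 3-class classification head.
base = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3
)

# Hypothetical LoRA hyperparameters; the values actually used for this model
# are not documented in this card.
config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in XLM-R
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only adapters and the classifier head train
```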

Model Details

  • Developed by: Osama Naguib
  • Funded by: Ahmed Zaky
  • Model Type: Sequence Classification
  • Languages: Arabic, English, Spanish
  • License: MIT
  • Fine-tuned from: xlm-roberta-base
  • Frameworks: Transformers, Datasets, PEFT, PyTorch

Model Sources

Intended Uses

Direct Use

This model can be used directly for sentiment classification on multilingual (EN/ES/AR) text data.

Output labels:

  • 0 → Negative
  • 1 → Neutral
  • 2 → Positive

Example use:
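A minimal sketch of inference, assuming the model is published as a full sequence-classification checkpoint; the repository id below is a placeholder, and if only the LoRA adapter is released it would instead be loaded on top of xlm-roberta-base with PEFT.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id; replace with the actual model repo.
model_id = "your-username/trilingual-sentiment-lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

labels = {0: "Negative", 1: "Neutral", 2: "Positive"}
texts = [
    "I really enjoyed this movie.",  # English
    "El servicio fue terrible.",     # Spanish
    "المنتج جيد جدا",                 # Arabic
]

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

for text, pred in zip(texts, logits.argmax(dim=-1).tolist()):
    print(f"{text} -> {labels[pred]}")
```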
