
GATE: General Arabic Text Embedding for Enhanced Semantic Textual Similarity with Matryoshka Representation Learning and Hybrid Loss Training

Published on May 30
· Submitted by Omartificial-Intelligence-Space on Jun 2

Abstract

AI-generated summary: GATE models using Matryoshka Representation Learning and a hybrid loss approach achieve state-of-the-art performance on Arabic Semantic Textual Similarity benchmarks.

Semantic textual similarity (STS) is a critical task in natural language processing (NLP), enabling applications in retrieval, clustering, and understanding semantic relationships between texts. However, research in this area for the Arabic language remains limited due to the lack of high-quality datasets and pre-trained models. This scarcity of resources has restricted accurate evaluation and progress on semantic similarity for Arabic text. This paper introduces General Arabic Text Embedding (GATE) models that achieve state-of-the-art performance on the Semantic Textual Similarity task within the MTEB benchmark. GATE leverages Matryoshka Representation Learning and a hybrid loss training approach with Arabic triplet datasets for Natural Language Inference, which are essential for enhancing model performance in tasks that demand fine-grained semantic understanding. GATE outperforms larger models, including OpenAI's embedding models, by 20-25% on STS benchmarks, effectively capturing the unique semantic nuances of Arabic.

Community


🧠 Arabic Matryoshka Embedding Models Collection

Welcome to the official Arabic Matryoshka Embedding Models collection!
This collection showcases a series of cutting-edge Arabic text embedding models built using:

  • 🪆 Matryoshka Representation Learning
  • ⚙️ Hybrid Loss Multi-task Training
  • 🔍 Arabic Triplet and NLI datasets

These models are designed to capture fine-grained semantic similarity in Arabic while being efficient, scalable, and resource-friendly.
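A minimal usage sketch (the repo id is assumed from this collection's naming; any of the models listed below should drop in the same way):

```python
from sentence_transformers import SentenceTransformer, util

# Assumed repo id from this collection; swap in any model from the table below.
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2")

sentences = [
    "القطة تجلس على السجادة",  # "The cat is sitting on the rug"
    "هناك قطة فوق البساط",     # "There is a cat on the carpet"
    "السيارة تسير بسرعة",      # "The car is moving fast"
]
embeddings = model.encode(sentences)

# The paraphrase pair should score well above the unrelated pair.
print(util.cos_sim(embeddings[0], embeddings[1]).item())
print(util.cos_sim(embeddings[0], embeddings[2]).item())
```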


📌 What's Inside?

  • ✅ State-of-the-art performance on Arabic STS benchmarks (MTEB: STS17, STS22, STS22-v2)
  • ✅ Multi-dimensional embeddings (768, 512, 256, 128, 64)
  • ✅ Models outperforming much larger systems such as OpenAI's embedding models and Mistral-7B on Arabic tasks
  • ✅ Trained with contrastive triplet learning, softmax classification, and cosine similarity loss
  • ✅ Includes adaptations of AraBERT, MARBERT, LaBSE, and E5 within the Matryoshka framework

🚀 Highlights from Our Research (GATE Paper)

📰 Paper Title:
GATE: General Arabic Text Embedding for Enhanced Semantic Textual Similarity with Matryoshka Representation Learning and Hybrid Loss Training

📄 Read on arXiv:
https://arxiv.org/abs/2505.24581

📊 Key Achievements:

  • Up to +25% improvement over OpenAI embeddings on Arabic STS
  • Models with only 135M parameters beating billion-parameter LLMs
  • Maintains high performance even at reduced dimensions (64d; see the truncation sketch after this list)
  • First large-scale benchmark of Arabic triplet-based contrastive embeddings
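Because the models are trained with Matryoshka Representation Learning, the leading dimensions of each embedding already form a usable sub-embedding. A minimal sketch of the 64d truncation (same assumed repo id as above):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2")  # assumed repo id

full = model.encode(["مثال على جملة عربية"])  # "An example Arabic sentence", shape (1, 768)

# With MRL, keep the first 64 dimensions and re-normalize for cosine
# similarity; no retraining or separate small model is needed.
small = full[:, :64]
small = small / np.linalg.norm(small, axis=1, keepdims=True)
print(full.shape, small.shape)  # (1, 768) (1, 64)
```

Recent sentence-transformers releases also expose a truncate_dim argument on the SentenceTransformer constructor that performs the same slicing at encode time.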

🔥 Top Models (So Far)

| Model Name | Base | Type | STS Avg Score |
|---|---|---|---|
| Arabic-Triplet-Matryoshka-V2 | AraBERT | Triplet + MRL | 69.99 |
| GATE-AraBERT-V1 | AraBERT | Hybrid Loss + MRL | 68.54 |
| Arabic-LabSE-Matryoshka | LaBSE | Triplet + MRL | 66.76 |
| Marbert-AllNLI-Triplet-Matryoshka | MARBERT | Dialect-Aware | 67.19 |
| E5-AllNLI-Triplet-Matryoshka | multilingual-E5 | Cross-lingual | 65.45 |

📦 Collection Link

🔗 Explore all models:
👉 Arabic Matryoshka Embedding Models Collection


🧪 Use Cases

  • Arabic Semantic Search (see the sketch after this list)
  • Duplicate Question Detection
  • Clustering & Retrieval
  • Arabic Text Understanding Tasks
  • Scalable NLP for low-resource environments
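For the semantic-search use case, a toy sketch over a hypothetical FAQ corpus (same assumed repo id as above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2")  # assumed repo id

corpus = [
    "كيف أعيد تعيين كلمة المرور؟",   # "How do I reset my password?"
    "ما هي ساعات العمل الرسمية؟",    # "What are the official working hours?"
    "كيف أتواصل مع خدمة العملاء؟",   # "How do I contact customer support?"
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query = "نسيت كلمة السر الخاصة بي"   # "I forgot my password"
query_emb = model.encode(query, convert_to_tensor=True)

# util.semantic_search returns one ranked hit list per query.
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```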

πŸ› οΈ Training Details

  • Hardware: NVIDIA A100 GPUs
  • Framework: 🤗 sentence-transformers, custom SentenceTransformerTrainer
  • Datasets: Arabic Triplet-NLI, STS pairs, Classification datasets
  • Training Losses: MultipleNegativesRankingLoss, CoSentLoss, SoftmaxLoss, MatryoshkaLoss (combined as sketched after this list)
  • Dimensions: Trained with [768, 512, 256, 128, 64]
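A minimal training sketch of the triplet + MRL part of this recipe, assuming an Arabic NLI triplet dataset with anchor/positive/negative columns (the dataset id is assumed; the full hybrid setup additionally mixes SoftmaxLoss and CoSentLoss on classification and STS data):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# AraBERT base checkpoint; sentence-transformers adds a mean-pooling head automatically.
model = SentenceTransformer("aubmindlab/bert-base-arabertv02")

# Assumed dataset id with "anchor" / "positive" / "negative" columns.
train_dataset = load_dataset("Omartificial-Intelligence-Space/Arabic-NLi-Triplet", split="train")

# Wrap the contrastive triplet loss in MatryoshkaLoss so every prefix
# dimension (768 down to 64) is optimized jointly.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```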

👋 Contributions & Feedback

We welcome feedback, benchmarks, and contributions!
If you’ve fine-tuned one of these models or tested them on new Arabic datasets, let us know!

📧 Contact: [email protected]


Let’s make Arabic NLP faster, smarter, and more accessible, one embedding at a time. 🌍
