Sentence Transformers - Cross-Encoders

AI & ML interests

This repository hosts the cross-encoders from the SentenceTransformers package. More details at https://www.sbert.net/docs/pretrained_cross-encoders.html
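
For instance, here is a minimal sketch of scoring query-passage pairs with one of these cross-encoders, assuming the sentence-transformers package is installed (the checkpoint name and the example texts are illustrative):

```python
from sentence_transformers import CrossEncoder

# Load one of the hosted cross-encoders; this checkpoint is just one example.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

# A cross-encoder scores each (query, passage) pair jointly rather than
# embedding the two texts separately, which typically gives better rankings.
pairs = [
    ("How many people live in Berlin?", "Berlin is the capital of Germany and has over 3 million inhabitants."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
]
scores = model.predict(pairs)  # one relevance score per pair
print(scores)
```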

Recent Activity

tomaarsen posted an update 3 days ago

🏎️ Today I'm introducing a method to train static embedding models that run 100x to 400x faster on CPU than common embedding models, while retaining 85%+ of the quality! Including 2 fully open models: training scripts, datasets, metrics.

We apply our recipe to train 2 Static Embedding models. Today we release:
2️⃣ an English Retrieval model and a general-purpose Multilingual similarity model (e.g. classification, clustering, etc.), both Apache 2.0
🧠 my modern training strategy: ideation -> dataset choice -> implementation -> evaluation
📜 my training scripts, using the Sentence Transformers library
📊 my Weights & Biases reports with losses & metrics
📕 my list of 30 training and 13 evaluation datasets

The 2 Static Embedding models have the following properties:
🏎️ Extremely fast, e.g. 107,500 sentences per second on a consumer CPU, compared to 270 for 'all-mpnet-base-v2' and 56 for 'gte-large-en-v1.5'
0️⃣ Zero active parameters: no Transformer blocks, no attention, not even a matrix multiplication. Super speed!
📏 No maximum sequence length! Embed texts of any length (note: longer texts may embed worse)
📏 Linear instead of quadratic complexity: 2x longer text takes 2x longer to embed, instead of 2.5x or more
🪆 Matryoshka support: you can truncate embeddings with minimal performance loss, e.g. 4x smaller with a 0.56% performance decrease on English similarity tasks (see the sketch below)

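A minimal sketch of that Matryoshka truncation, assuming a recent sentence-transformers release that supports the truncate_dim argument (the 256-dimension target and the example texts are illustrative, not from the post):

```python
from sentence_transformers import SentenceTransformer

# Truncate the Matryoshka embeddings at load time; 256 is an illustrative
# target (4x smaller if the model's full embeddings are 1024-dimensional).
model = SentenceTransformer(
    "sentence-transformers/static-similarity-mrl-multilingual-v1",
    truncate_dim=256,
)

embeddings = model.encode(["The weather is lovely today.", "Das Wetter ist heute schön."])
print(embeddings.shape)  # e.g. (2, 256)
```
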
Check out the full blogpost if you'd like to 1) use these lightning-fast models or 2) learn how to train them with consumer-level hardware: https://huggingface.co/blog/static-embeddings

The blogpost contains a lengthy list of possible advancements; I'm very confident that our 2 models are only the tip of the iceberg, and we may be able to get even better performance.

Alternatively, check out the models:
* sentence-transformers/static-retrieval-mrl-en-v1
* sentence-transformers/static-similarity-mrl-multilingual-v1
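
And a minimal usage sketch for the retrieval model, assuming sentence-transformers is installed (the queries, documents, and plain encode calls are illustrative; the model card may recommend specific prompts):

```python
from sentence_transformers import SentenceTransformer

# Static embedding model: fast enough to run comfortably on CPU.
model = SentenceTransformer("sentence-transformers/static-retrieval-mrl-en-v1", device="cpu")

queries = ["What is Matryoshka representation learning?"]
documents = [
    "Matryoshka embeddings can be truncated to fewer dimensions with little quality loss.",
    "The weather in Berlin is mild in spring.",
]

query_embeddings = model.encode(queries)
document_embeddings = model.encode(documents)

# Score every query against every document with the model's similarity function.
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
```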

tomaarsen posted an update 18 days ago

That didn't take long! Nomic AI has fine-tuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering, and more!

Details:
🤖 Based on ModernBERT-base with 149M parameters.
📊 Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
🏎️ Immediate FA2 and unpadding support for super efficient inference.
🪆 Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
➡️ Maximum sequence length of 8192 tokens!
2️⃣ Trained in 2 stages: unsupervised contrastive data -> high-quality labeled datasets.
➕ Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
🏛️ Apache 2.0 licensed: fully commercially permissible

Try it out here: nomic-ai/modernbert-embed-base
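
A minimal loading sketch through Sentence Transformers, assuming the usual Nomic-style task prefixes (check the model card for the exact recommended prefixes; the example texts are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Nomic embedding models typically expect a task prefix on each input text.
query_embeddings = model.encode(["search_query: What is TSNE?"])
document_embeddings = model.encode([
    "search_document: t-SNE is a dimensionality-reduction technique often used for visualization.",
])

print(model.similarity(query_embeddings, document_embeddings))
```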

Very nice work by Zach Nussbaum and colleagues at Nomic AI.