🏎️ Today I'm introducing a method to train static embedding models that run 100x to 400x faster on CPU than common embedding models, while retaining 85%+ of the quality! Including 2 fully open models: training scripts, datasets, metrics.
We apply our recipe to train 2 Static Embedding models that we release today! We release:
2️⃣ an English Retrieval model and a general-purpose Multilingual similarity model (e.g. classification, clustering, etc.), both Apache 2.0 (usage sketch below)
🧠 my modern training strategy: ideation -> dataset choice -> implementation -> evaluation
📜 my training scripts, using the Sentence Transformers library
📊 my Weights & Biases reports with losses & metrics
📁 my list of 30 training and 13 evaluation datasets
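For context, here's a minimal usage sketch with the Sentence Transformers library. The repository ID is an assumption for illustration (check the release for the exact Hub names of the two models); the `encode` and `similarity` calls are standard library API.

```python
from sentence_transformers import SentenceTransformer

# Static embedding models load like any other Sentence Transformers model,
# but run comfortably on CPU. The model name below is an assumed placeholder.
model = SentenceTransformer("sentence-transformers/static-retrieval-mrl-en-v1", device="cpu")

queries = ["What is the capital of France?"]
documents = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
]

# encode() produces one embedding per input text; similarity() compares them
query_embeddings = model.encode(queries)
document_embeddings = model.encode(documents)
scores = model.similarity(query_embeddings, document_embeddings)
print(scores)  # higher score = more relevant document
```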
The 2 Static Embedding models have the following properties:
🏎️ Extremely fast, e.g. 107,500 sentences per second on a consumer CPU, compared to 270 for 'all-mpnet-base-v2' and 56 for 'gte-large-en-v1.5'
0️⃣ Zero active parameters: no Transformer blocks, no attention, not even a matrix multiplication. Super speed!
📏 No maximum sequence length! Embed texts of any length (note: longer texts may embed worse)
📈 Linear instead of quadratic complexity: a 2x longer text takes 2x longer to embed, instead of 2.5x or more.
🪆 Matryoshka support: allows you to truncate embeddings with minimal performance loss (e.g. 4x smaller with a 0.56% performance decrease on English Similarity tasks; see the sketch after this list)
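As a rough illustration of the Matryoshka point above, truncation typically looks like this in recent versions of Sentence Transformers. This is a sketch, not the release's exact evaluation code, and the model name is again an assumed placeholder; the output dimensionality of 1024 is also an assumption.

```python
from sentence_transformers import SentenceTransformer

# Sketch: keep only the first 256 dimensions of each embedding (4x smaller
# if the full embeddings are 1024-dimensional). Matryoshka-trained models
# retain most of their quality after this kind of truncation.
model = SentenceTransformer(
    "sentence-transformers/static-retrieval-mrl-en-v1",  # assumed model name
    device="cpu",
    truncate_dim=256,
)

embeddings = model.encode(["Static embeddings are fast.", "Transformers are accurate."])
print(embeddings.shape)  # e.g. (2, 256) instead of (2, 1024)
```

Smaller embeddings also mean proportionally less storage and faster similarity search, which compounds nicely with the CPU speed numbers above.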
The blog post contains a lengthy list of possible follow-up improvements; I'm very confident that our 2 models are only the tip of the iceberg, and that even better performance is within reach.