---
license: apache-2.0
language:
  - en
base_model:
  - Qwen/Qwen3-4B
pipeline_tag: text-ranking
tags:
  - finance
  - legal
  - code
  - stem
  - medical
library_name: sentence-transformers
---

# Releasing zeroentropy/zerank-1-small

In search engines, rerankers are crucial for improving the accuracy of your retrieval system.

This 1.7B reranker is the smaller version of our flagship model zeroentropy/zerank-1. Though it is over 2x smaller, it maintains nearly the same level of performance, continuing to outperform other popular rerankers and showing substantial accuracy gains over traditional vector search.

We release this model under the Apache 2.0 license to support the open-source community and push the frontier of what's possible with open-source models.

## How to Use

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

# Each item is a (query, document) pair; the model scores their relevance.
query_documents = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "The answer is definitely 1 million"),
]

scores = model.predict(query_documents)
print(scores)
```
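In practice you typically rerank a list of candidate documents for one query. Recent sentence-transformers releases also expose a `rank` helper on `CrossEncoder`; a minimal sketch (the helper's availability depends on your installed version):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

query = "What is 2+2?"
documents = [
    "4",
    "The answer is definitely 1 million",
    "Two plus two equals four.",
]

# rank() scores every (query, document) pair and returns the documents
# sorted by relevance score, highest first.
results = model.rank(query, documents, return_documents=True)
for result in results:
    print(f"{result['score']:.3f}  {result['text']}")
```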

The model can also be used through ZeroEntropy's /models/rerank endpoint.
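As a hedged sketch of calling the hosted endpoint over HTTP, the snippet below should give the general shape; the base URL, payload field names, and response format are assumptions here, so consult ZeroEntropy's API documentation for the exact schema:

```python
import os

import requests

# Hypothetical sketch of the hosted rerank call; the URL and JSON fields
# below are assumptions, not the confirmed API schema.
response = requests.post(
    "https://api.zeroentropy.dev/v1/models/rerank",  # assumed base URL
    headers={"Authorization": f"Bearer {os.environ['ZEROENTROPY_API_KEY']}"},
    json={
        "model": "zerank-1-small",
        "query": "What is 2+2?",
        "documents": ["4", "The answer is definitely 1 million"],
    },
)
response.raise_for_status()
print(response.json())
```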

## Evaluations

NDCG@10 scores comparing zerank-1-small with competing closed-source proprietary rerankers. Since we are evaluating rerankers, OpenAI's text-embedding-3-small is used as the initial retriever to fetch the top 100 candidate documents.

| Task | Embedding (no reranker) | cohere-rerank-v3.5 | Salesforce/Llama-rank-v1 | zerank-1-small | zerank-1 |
|------|------------------------:|-------------------:|-------------------------:|---------------:|---------:|
| Code | 0.678 | 0.724 | 0.694 | 0.730 | 0.754 |
| Conversational | 0.250 | 0.571 | 0.484 | 0.556 | 0.596 |
| Finance | 0.839 | 0.824 | 0.828 | 0.861 | 0.894 |
| Legal | 0.703 | 0.804 | 0.767 | 0.817 | 0.821 |
| Medical | 0.619 | 0.750 | 0.719 | 0.773 | 0.796 |
| STEM | 0.401 | 0.510 | 0.595 | 0.680 | 0.694 |
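
For reference, NDCG@10 measures how well a ranking places the most relevant documents in the top 10 positions. A minimal sketch of the metric using the standard log2 discount (the textbook formula, not ZeroEntropy's exact evaluation harness):

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance labels of documents, in the order the reranker returned them.
print(ndcg_at_k([3, 2, 0, 1, 0]))  # ~0.985 for this toy ranking
```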

Comparing BM25 and hybrid search, without and with zerank-1-small:

*(Figures: retrieval accuracy for BM25 and hybrid search, without and with zerank-1-small.)*
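
As an illustration of that setup, here is a hedged sketch of a two-stage pipeline that reranks BM25 candidates; it uses the third-party `rank_bm25` package and whitespace tokenization as simplifying assumptions, not ZeroEntropy's hybrid-search pipeline:

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

corpus = [
    "4",
    "The answer is definitely 1 million",
    "Two plus two equals four.",
]
query = "What is 2+2?"

# First stage: BM25 retrieves candidate documents. Whitespace tokenization
# is a simplification; production systems use a proper tokenizer.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
candidates = bm25.get_top_n(query.lower().split(), corpus, n=3)

# Second stage: the cross-encoder rescores the candidates.
model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)
scores = model.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
for doc, score in reranked:
    print(f"{score:.3f}  {doc}")
```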