Leaderboard Dev
Dedicated display for RTEB benchmark results
Massive Text Embeddings Benchmark
HUME: Measuring the Human-Model Performance Gap in Text Embedding Task
Maintaining MTEB: Towards Long Term Usability and Reproducibility of Embedding Benchmarks
- Pass device=["cuda:0", "cuda:1"] or device=["cpu"]*4 on the model.predict or model.rank calls to spread inference over multiple devices or processes.
- Pass a dataset_id, e.g. dataset_id="lightonai/NanoBEIR-de" for the German benchmark.
- Pass output_scores=True to get similarity scores returned. This can be useful for some distillation losses!
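As a rough illustration of the multi-device option above, here is a minimal sketch. The checkpoint name, query, and passages are placeholders of my own and not from this page; the device argument on predict and rank is taken from the note above.

```python
from sentence_transformers import CrossEncoder

# Example reranker checkpoint; any CrossEncoder model works the same way.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")

query = "How many people live in Berlin?"
passages = [
    "Berlin has around 3.7 million registered inhabitants.",
    "Berlin is known for its museums and nightlife.",
]

# Score query-passage pairs across two GPUs ...
scores = model.predict([(query, p) for p in passages], device=["cuda:0", "cuda:1"])

# ... or rank the passages using four CPU worker processes instead.
ranking = model.rank(query, passages, device=["cpu"] * 4)
print(scores, ranking)
```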