Performance (#1), opened by tomaarsen (HF staff)
Hello!
Nice work! I was quite curious, so I ran this model on 2 Portuguese datasets from MTEB and compared it against intfloat/multilingual-e5-small. These are my findings:
| Model | BelebeleRetrieval (por) | MintakaRetrieval (por) |
|---|---|---|
| cnmoro/static-retrieval-distilbert-ptbr | 0.79412 NDCG@10 | 0.19006 NDCG@10 |
| intfloat/multilingual-e5-small | 0.91068 NDCG@10 | 0.22553 NDCG@10 |
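For context on the metric in the table: NDCG@10 discounts each relevant document by the log of its rank and normalizes by the best possible ordering. Here is a minimal sketch of NDCG@k in plain Python; the `ndcg_at_k` helper is illustrative, not the mteb implementation.

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for a ranked list of graded relevance scores.

    relevances[i] is the relevance of the document the model
    ranked at position i (0-based). DCG discounts by log2(rank+1),
    and we normalize by the DCG of the ideal (sorted) ranking.
    """
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# The single relevant document at rank 1 gives a perfect score;
# pushing it to rank 2 discounts it by 1/log2(3).
print(ndcg_at_k([1, 0, 0, 0]))          # perfect ranking -> 1.0
print(ndcg_at_k([0, 1, 0, 0]))          # relevant doc at rank 2
```

Averaging this score over all queries in a task yields numbers like those in the table above.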
That's super impressive! The static model reaches 87% and 84% of the transformer-based model's NDCG@10 on these two datasets, respectively.
- Tom Aarsen
Thanks! I've been working on embedding models lately, and when I saw your article I knew I had to test it out :)