Update model metadata to set pipeline tag to the new `text-ranking` and library name to `sentence-transformers`
Hello!
## Pull Request overview
* Update metadata to set pipeline tag to the new `text-ranking`
* Update metadata to set library name to `sentence-transformers`
## Changes
This is an automated pull request to update the metadata of the model card. We recently introduced the [`text-ranking`](https://huggingface.co/models?pipeline_tag=text-ranking) pipeline tag for models that are used for ranking tasks, and we suspect that this model is one of them. I also added metadata to specify that this model can be loaded with the `sentence-transformers` library, as it should be possible to load any model that is compatible with `transformers`' `AutoModelForSequenceClassification`.
Feel free to verify that it works with the following:
```bash
pip install sentence-transformers
```
```python
from sentence_transformers import CrossEncoder
# Load the reranker as a CrossEncoder
model = CrossEncoder("BAAI/bge-reranker-base")

# Score (query, passage) pairs; higher scores indicate higher relevance
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
```
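
For reference, here is a minimal sketch of the plain `transformers` usage that the compatibility note above relies on. The tokenization settings (padding, truncation) and the single-logit readout are assumptions based on standard reranker usage, not part of this PR:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed equivalent of the CrossEncoder example above, using transformers directly.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-base")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-base")
model.eval()

pairs = [
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
]

with torch.no_grad():
    inputs = tokenizer(
        [query for query, _ in pairs],
        [passage for _, passage in pairs],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    # The reranker emits one relevance logit per (query, passage) pair.
    scores = model(**inputs).logits.view(-1)

print(scores)
```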
Feel free to respond if you have questions or concerns.
- Tom Aarsen
```diff
@@ -5,14 +5,16 @@ language:
 - zh
 tags:
 - mteb
+pipeline_tag: text-ranking
+library_name: sentence-transformers
 model-index:
 - name: bge-reranker-base
   results:
   - task:
       type: Reranking
     dataset:
-      type: C-MTEB/CMedQAv1-reranking
       name: MTEB CMedQAv1
+      type: C-MTEB/CMedQAv1-reranking
       config: default
       split: test
       revision: None
@@ -24,8 +26,8 @@ model-index:
   - task:
       type: Reranking
     dataset:
-      type: C-MTEB/CMedQAv2-reranking
       name: MTEB CMedQAv2
+      type: C-MTEB/CMedQAv2-reranking
       config: default
       split: test
       revision: None
@@ -37,8 +39,8 @@ model-index:
   - task:
       type: Reranking
     dataset:
-      type: C-MTEB/Mmarco-reranking
       name: MTEB MMarcoReranking
+      type: C-MTEB/Mmarco-reranking
       config: default
       split: dev
       revision: None
@@ -50,8 +52,8 @@ model-index:
   - task:
       type: Reranking
     dataset:
-      type: C-MTEB/T2Reranking
       name: MTEB T2Reranking
+      type: C-MTEB/T2Reranking
       config: default
       split: dev
       revision: None
@@ -60,7 +62,6 @@ model-index:
       value: 67.27728847727172
     - type: mrr
       value: 77.1315192743764
-pipeline_tag: feature-extraction
 ---
 
 **We have updated the [new reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), supporting larger lengths, more languages, and achieving better performance.**
```