---
license: apache-2.0
---

<div align="center">
<h1>ERank: Fusing Supervised Fine-Tuning and Reinforcement Learning for Effective and Efficient Text Reranking</h1>
</div>

<p align="center">
<a href="https://arxiv.org/abs/">Arxiv</a>&nbsp;|&nbsp;<a href="https://github.com/YZ-Cai/ERank">Github</a>
</p>

## Introduction

We introduce ERank, a highly effective and efficient pointwise reranker built from a reasoning LLM, which excels across diverse relevance scenarios with low latency.
Surprisingly, it also outperforms recent listwise rerankers on the most challenging reasoning-intensive tasks.

<img src="./assets/overview.png">

ERank is trained with a novel two-stage pipeline: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL).
In the SFT stage, unlike traditional pointwise rerankers that train the LLM for binary relevance classification, we encourage the LLM to generatively output fine-grained integer scores.
In the RL stage, we introduce a novel listwise-derived reward, which instills global ranking awareness into the efficient pointwise architecture.

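The paper defines the exact scoring prompt and reward; purely to illustrate the idea, the sketch below (hypothetical helpers `parse_score` and `listwise_derived_reward`, with NDCG@k assumed as the listwise measure) parses a generated integer score per document and rewards a set of pointwise predictions by the quality of the ranking they induce:

```python
import math
import re

def parse_score(generated_text: str) -> int:
    # Illustrative SFT output format: the LLM generates a fine-grained
    # integer relevance score rather than a binary yes/no label.
    match = re.search(r"-?\d+", generated_text)
    return int(match.group()) if match else 0

def listwise_derived_reward(pred_scores, gold_labels, k=10):
    # Illustrative RL reward (an assumption, not the paper's formula):
    # rank one query's candidates by their pointwise scores, then reward
    # the *global* ranking quality, measured here as NDCG@k against the
    # gold relevance labels.
    order = sorted(range(len(pred_scores)), key=lambda i: -pred_scores[i])
    dcg = sum(gold_labels[i] / math.log2(r + 2) for r, i in enumerate(order[:k]))
    ideal = sorted(gold_labels, reverse=True)
    idcg = sum(g / math.log2(r + 2) for r, g in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

scores = [parse_score(t) for t in ["Score: 5", "Score: 1", "Score: 3"]]
print(listwise_derived_reward(scores, gold_labels=[0, 1, 1]))  # ≈ 0.69
```
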
## Model List

We provide trained reranking models in three sizes (4B, 14B, and 32B), all of which support customizing the input instruction for different tasks (illustrative instruction strings follow the table below).

| Model | Size | Layers | Sequence Length | Instruction Aware |
|------------------------------------------|------|--------|-----------------|-------------------|
| [ERank-4B](https://huggingface.co/Alibaba-NLP/ERank-4B) | 4B | 36 | 32K | Yes |
| [ERank-14B](https://huggingface.co/Alibaba-NLP/ERank-14B) | 14B | 40 | 128K | Yes |
| [ERank-32B](https://huggingface.co/Alibaba-NLP/ERank-32B) | 32B | 64 | 128K | Yes |

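Because the models are instruction aware, the instruction embedded in the prompt can be tailored to the task at hand. The wording used during evaluation ships in the `examples` directory of the GitHub repo; the strings below are merely illustrative placeholders:

```python
# Illustrative task-specific instructions (hypothetical wording; see the
# `examples` directory for the instructions actually used in evaluation).
instructions = {
    "web": "Retrieve relevant documents for the query.",
    "code": "Given a programming question, retrieve code snippets or documentation that answer it.",
    "reasoning": "Retrieve passages that provide the background needed to answer the question.",
}
```
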
## Evaluation

We evaluate ERank on both reasoning-intensive benchmarks (BRIGHT and FollowIR) and traditional semantic relevance benchmarks (BEIR and TREC DL).
All methods use the original queries without hybrid scores.

| Paradigm | Method | Average | BRIGHT | FollowIR | BEIR | TREC DL |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| - | First-stage retriever | 25.9 | 13.7 | 0 | 40.8 | 49.3 |
| Listwise | Rank-R1-7B | 34.6 | 15.7 | 3.6 | **49.0** | 70.0 |
| Listwise | Rearank-7B | 35.3 | 17.4 | 2.3 | **49.0** | **72.5** |
| Pointwise | JudgeRank-8B | 32.1 | 17.0 | 9.9 | 39.1 | 62.6 |
| Pointwise | Rank1-7B | 34.6 | 18.2 | 9.1 | 44.2 | 67.1 |
| Pointwise | **ERank-4B (Ours)** | 36.8 | 22.7 | 11.0 | 44.8 | 68.9 |
| Pointwise | **ERank-14B (Ours)** | 36.9 | 23.1 | 10.3 | 47.1 | 67.1 |
| Pointwise | **ERank-32B (Ours)** | **38.1** | **24.4** | **12.1** | 47.7 | 68.1 |

On the most challenging BRIGHT benchmark, with the top-100 documents retrieved by ReasonIR-8B using GPT-4 reason-queries, ERank with BM25 hybrid scoring achieves state-of-the-art nDCG@10.

| Method | nDCG@10 |
| :--- | :--- |
| ReasonIR-8B | 30.5 |
| Rank-R1-7B | 24.1 |
| Rank1-7B | 24.3 |
| Rearank-7B | 27.5 |
| JudgeRank-8B | 20.2 |
| *+ BM25 hybrid* | 22.7 |
| Rank-R1-32B-v0.2 | 37.7 |
| *+ BM25 hybrid* | 40.0 |
| **ERank-4B (Ours)** | 30.5 |
| *+ BM25 hybrid* | 38.7 |
| **ERank-14B (Ours)** | 31.8 |
| *+ BM25 hybrid* | 39.3 |
| **ERank-32B (Ours)** | 32.8 |
| *+ BM25 hybrid* | **40.2** |

Since ERank is a pointwise reranker that scores each candidate independently, documents can be scored in parallel, giving it much lower latency than listwise models, which rerank candidates in sequential windows.

<div align="center">
<img src="./assets/latency.png" width=400px>
</div>

For more details, please refer to our [Paper](https://arxiv.org/abs/).

## Usage

We provide inference implementations based on Hugging Face Transformers and vLLM, respectively.

```python
from examples.ERank_Transformer import ERank_Transformer
from examples.ERank_vLLM import ERank_vLLM
from examples.utils import hybrid_scores

# select a model
model_name_or_path = "Alibaba-NLP/ERank-4B"
# model_name_or_path = "Alibaba-NLP/ERank-14B"
# model_name_or_path = "Alibaba-NLP/ERank-32B"

# use vLLM or Transformers
# reranker = ERank_Transformer(model_name_or_path)
reranker = ERank_vLLM(model_name_or_path)

# input data
instruction = "Retrieve relevant documents for the query."
query = "I am happy"
docs = [
    {"content": "excited", "first_stage_score": 46.7},
    {"content": "sad", "first_stage_score": 1.5},
    {"content": "peaceful", "first_stage_score": 2.3},
]

# rerank
results = reranker.rerank(query, docs, instruction, truncate_length=2048)
print(results)
# [
#     {'content': 'excited', 'first_stage_score': 46.7, 'rank_score': 4.84},
#     {'content': 'peaceful', 'first_stage_score': 2.3, 'rank_score': 2.98},
#     {'content': 'sad', 'first_stage_score': 1.5, 'rank_score': 0.0},
# ]

# Optional: hybrid with first-stage scores
alpha = 0.2
hybrid_results = hybrid_scores(results, alpha)
print(hybrid_results)
# [
#     {'content': 'excited', 'first_stage_score': 46.7, 'rank_score': 4.84, 'hybrid_score': 1.18},
#     {'content': 'peaceful', 'first_stage_score': 2.3, 'rank_score': 2.98, 'hybrid_score': 0.01},
#     {'content': 'sad', 'first_stage_score': 1.5, 'rank_score': 0.0, 'hybrid_score': -1.19},
# ]
```
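
The numbers in the example output above are consistent with `hybrid_scores` z-normalizing each score list and mixing them with weight `alpha` on the first-stage side; below is a minimal re-implementation sketch under that assumption (the shipped `examples/utils.py` is authoritative):

```python
import statistics

def hybrid_scores_sketch(results, alpha):
    # Assumed behavior: z-normalize first-stage and rank scores per query,
    # then combine as alpha * first_stage_z + (1 - alpha) * rank_z.
    def znorm(values):
        mean = statistics.mean(values)
        std = statistics.pstdev(values) or 1.0  # population std; guard zeros
        return [(v - mean) / std for v in values]

    first = znorm([d["first_stage_score"] for d in results])
    rank = znorm([d["rank_score"] for d in results])
    for d, f, r in zip(results, first, rank):
        d["hybrid_score"] = round(alpha * f + (1 - alpha) * r, 2)
    return sorted(results, key=lambda d: d["hybrid_score"], reverse=True)

# Reproduces the hybrid scores printed above: 1.18, 0.01, -1.19.
```
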
Please refer to the `examples` directory for details; it also provides the instructions used in the prompts during evaluation.

## Citation

If you find our work helpful, please consider citing our paper.

```

```