---
license: mit
language:
- en
tags:
- reranker
- retrieval
- runs
- trec
- information-retrieval
- rank1
- benchmark
---

# rank1-run-files: Pre-computed Run Files for Reranking Evaluation

📄 [Paper](https://arxiv.org/abs/2502.18418) | 🚀 [GitHub Repository](https://github.com/orionw/rank1)

This dataset contains the pre-computed run files used by the rank1 family of models on various retrieval benchmarks. These files provided the candidate lists for top-k reranking, and the dataset also includes the re-annotated DL19 qrels. Downloading these files is required to reproduce our results.
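
To fetch everything locally, here is a minimal sketch using `huggingface_hub`; the repo id `jhu-clsp/rank1-run-files` is an assumption inferred from this card's title and the other `jhu-clsp` resources listed below.

```python
# Minimal sketch, assuming this dataset lives at jhu-clsp/rank1-run-files
# (inferred from the card title and the jhu-clsp resources listed below).
from huggingface_hub import snapshot_download

# Download every file in the dataset repo to a local cache directory.
local_dir = snapshot_download(
    repo_id="jhu-clsp/rank1-run-files",
    repo_type="dataset",  # run files are stored as a dataset repo, not a model
)
print(f"Run files available at: {local_dir}")
```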

## Benchmarks Included

The dataset includes run files for the following benchmarks:

- BEIR (multiple datasets including NFCorpus, SciFact, etc.)
- NevIR
- TREC-DL 2019
- BRIGHT

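A minimal parsing sketch follows, assuming the files use the conventional six-column TREC run format (`query_id Q0 doc_id rank score run_tag`); both the format and the filename are assumptions, not stated on this card.

```python
# Hedged sketch: assumes the standard six-column TREC run format
# (query_id Q0 doc_id rank score run_tag); the filename is hypothetical.
from collections import defaultdict

def read_trec_run(path):
    """Parse a TREC run file into {query_id: [(doc_id, score), ...]}."""
    run = defaultdict(list)
    with open(path) as f:
        for line in f:
            qid, _q0, docid, _rank, score, _tag = line.split()
            run[qid].append((docid, float(score)))
    # Keep each query's candidates sorted by descending score.
    for qid in run:
        run[qid].sort(key=lambda pair: pair[1], reverse=True)
    return dict(run)

run = read_trec_run("dl19_bm25_top100.txt")  # hypothetical filename
```
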
## Associated Models and Resources

| Resource | Description |
|:---------|:------------|
| [rank1-7b](https://huggingface.co/jhu-clsp/rank1-7b) | Base rank1 model (7B parameters) |
| [rank1-14b](https://huggingface.co/jhu-clsp/rank1-14b) | Larger rank1 variant (14B parameters) |
| [rank1-32b](https://huggingface.co/jhu-clsp/rank1-32b) | Largest rank1 variant (32B parameters) |
| [rank1-mistral-2501-24b](https://huggingface.co/jhu-clsp/rank1-mistral-2501-24b) | Mistral-based rank1 variant (24B parameters) |
| [rank1-llama3-8b](https://huggingface.co/jhu-clsp/rank1-llama3-8b) | Llama 3.1-based rank1 variant (8B parameters) |
| [rank1-r1-msmarco](https://huggingface.co/datasets/jhu-clsp/rank1-r1-msmarco) | All R1 output examples from MS MARCO |
| [rank1-training-data](https://huggingface.co/datasets/jhu-clsp/rank1-training-data) | Training data used for rank1 models |

## Citation

If you use these run files in your research, please cite:

```bibtex
@misc{weller2025rank1testtimecomputereranking,
      title={Rank1: Test-Time Compute for Reranking in Information Retrieval},
      author={Orion Weller and Kathryn Ricci and Eugene Yang and Andrew Yates and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2502.18418},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2502.18418},
}
```

## License

[MIT License](https://github.com/orionw/rank1/blob/main/LICENSE)